State-space models can learn in-context by gradient descent (arxiv.org)
86 points by dsalaj 4 days ago | 36 comments





> can reproduce the outputs of an implicit linear model with least squares loss after one step of gradient descent.
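(To make that quoted claim concrete, here is a toy numpy sketch of the standard "one gradient step on an implicit linear model" construction; the sizes and learning rate are made up for illustration and this is not code from the paper:)

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 4, 32
    X = rng.normal(size=(n, d))        # in-context example inputs
    y = X @ rng.normal(size=d)         # in-context example targets
    x_q = rng.normal(size=d)           # query token

    # One gradient step on the least-squares loss, starting from w = 0:
    #   w_1 = w_0 - lr * X^T (X w_0 - y) = lr * X^T y
    lr = 0.1
    pred_gd = (lr * X.T @ y) @ x_q

    # The same prediction written as an attention-style weighted sum over the context:
    #   pred = lr * sum_i y_i * <x_i, x_q>
    pred_attn = lr * np.sum(y * (X @ x_q))

    print(np.allclose(pred_gd, pred_attn))   # True: one GD step == a linear-attention readout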

Makes you wonder if we're training LLMs the hard way. For example, if computers had been invented before Calculus, we'd have been using "Numerical Integration" (iterating the differential squares to sum up areas, etc) and "Numerical Differentiation" (ditto for calculating slopes).

So I wonder if we're simply in a pre-Calculus-like phase of NNs/Perceptrons, where we haven't yet realized there's a mathematical way to "solve" a bunch of equations simultaneously and arrive at the best (or some locally optimal) model weights for a given NN architecture and set of training data.

From a theoretical standpoint it IS a black-box problem like this, where the set of training data goes in and an array of model weights comes out. If I were to guess, I'd bet there'll be some kind of "random seed" we can add as input, and for each seed we'll get a different local minimum for the model weights.

But I'm not a mathematician and there may be some sort of PROOF that what I just said can definitely never be done?


NNs have complex non-convex loss functions that don't admit a closed-form solution. Even for small models, finding the optimal weights can be shown to be NP-complete. In fact, even for linear regression (least squares), which has a closed-form solution, it can be computationally cheaper to run gradient descent, since finding the closed-form solution requires you to calculate and invert a large matrix (X^T X).
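(A minimal numpy sketch of that contrast, purely illustrative, with made-up sizes and learning rate: the closed form solves the normal equations, while gradient descent iterates toward the same answer without ever inverting anything.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                  # 200 samples, 5 features
    y = X @ rng.normal(size=5) + 0.01 * rng.normal(size=200)

    # Closed form (normal equations): build and solve the 5x5 system X^T X w = X^T y
    w_closed = np.linalg.solve(X.T @ X, X.T @ y)

    # Gradient descent on the same least-squares loss
    w_gd = np.zeros(5)
    for _ in range(500):
        grad = X.T @ (X @ w_gd - y) / len(y)       # gradient of half the mean squared error
        w_gd -= 0.1 * grad

    print(np.allclose(w_closed, w_gd, atol=1e-4))  # both land on (nearly) the same weights

With a handful of features the closed form is trivial; the point above is that once X^T X gets large, iterating can be cheaper than forming and solving it.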

Which in some sense is intuitive: any closed form that can model general computation to any significant degree should be hard: if it weren't, you could encode your NP-complete problem into it, solve it in an efficient closed form, and collect your Fields medal for proving P = NP.

Intuition is often wrong, even for high IQ people, like your average HN user. lol.

For a long time it was intuitive that you cannot find the area under arbitrary functions, but then Calculus was invented, showing us a new "trick" that was previously unfathomable and indistinguishable from magic.

I'm just not sure mankind's understanding of Mathematics is out of new "tricks" to be learned. I think there are types of algorithms today that look like they require N iterations to get X precision, when in reality we might be able to divide N by some factor, for some algorithms, and still end up with X precision.


> I'm just not sure mankind's understanding of Mathematics is out of new "tricks" to be learned.

This is my opinion also as it relates to AI/ANNs. Things I read about how scientists see the brain shifting due to learning (minimum-energy-of-network type stuff) suggest the brain has some functions figured out that we haven't identified yet.

Maybe it's math already fully understood just not applied well to ANN's, but maybe there's some secret sauce in there.


One reason to believe there's even new low-hanging fruit (that doesn't even require new math) is how simple and trivial the "Attention Heads" structure of the Transformer architecture really is. It's not advanced at all. It was just a great idea that panned out, one that pretty much any creative AI researcher could've thought up after smokin' a joint. lol. I mean, someone could do trivial experiments with different Perceptron network structuring and end up revolutionizing the world.

I think things are gonna get interesting real quick once LLMs themselves start "self experimenting" with writing code for different architectures.


Thanks for that great clarification. I had seen all those words before, but just not in that particular order. haha.

Maybe our only hope of doing LLM training runs in a tiny amount of time will be from Quantum Computing or even Photonic (wave-based) Computing.


There are actually neural networks with explicit optimization layers but I don’t think these have really had much success.

I just have a hunch we're in early days still, even with Transformer architectures. The MLP (Perceptron) is such a simple mathematical structure, mostly doing linear stuff (tons of multiplications, then a few adds, and a squashing-type activation function), plus of course the attention-heads add-on from the Transformers paper (and other minor things). Ultimately it's a very easy-to-understand data structure, so it's hard for me to believe there's not massive leaps and bounds that we can take to gain orders of magnitude more performance just like the leap that the Transformers paper had.
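(For anyone who hasn't looked inside one: the MLP sublayer really is just the handful of operations described above. A bare-bones numpy sketch, with made-up sizes and GELU standing in for the "squashing-type activation":)

    import numpy as np

    def gelu(x):
        # a smooth "squashing-type" activation (tanh approximation of GELU)
        return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

    def mlp_block(x, W1, b1, W2, b2):
        # the feed-forward sublayer: matmul, add bias, squash, matmul, add bias
        return gelu(x @ W1 + b1) @ W2 + b2

    rng = np.random.default_rng(0)
    d_model, d_hidden = 8, 32
    x = rng.normal(size=(4, d_model))               # 4 token embeddings
    W1, b1 = 0.1 * rng.normal(size=(d_model, d_hidden)), np.zeros(d_hidden)
    W2, b2 = 0.1 * rng.normal(size=(d_hidden, d_model)), np.zeros(d_model)
    print(mlp_block(x, W1, b1, W2, b2).shape)       # (4, 8)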

> We can take to gain orders of magnitude more performance just like the leap that the Transformers paper had.

Afaik the most important benefit of transformers isn't their “performance” (in the sense of ability to perform their tasks) but their scalability, which comes from their ability to be trained and evaluated efficiently on big GPU clusters, which isn't something you can do with recurrent neural networks.

And then, if I understood correctly, the benefit of state-space models is that you can train them in parallel and run them in a recurrent fashion, making inference cheaper than transformers, especially as context size grows.
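(A toy diagonal linear SSM makes that concrete: the same layer can be evaluated step by step with a constant-size state at inference, or unrolled in parallel over the whole sequence for training. Illustrative numpy only, not the paper's architecture:)

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 16, 4
    a = rng.uniform(0.5, 0.95, size=d)    # diagonal state-transition coefficients
    b = rng.normal(size=d)
    x = rng.normal(size=(T, d))

    # Recurrent form (cheap inference: constant-size state, one update per step)
    h, ys_rec = np.zeros(d), []
    for t in range(T):
        h = a * h + b * x[t]
        ys_rec.append(h.copy())
    ys_rec = np.array(ys_rec)

    # Parallel form (good for training): h_t = sum_{s<=t} a^(t-s) * b * x_s, all t at once
    t_idx, s_idx = np.arange(T)[:, None, None], np.arange(T)[None, :, None]
    weights = (a ** (t_idx - s_idx)) * (s_idx <= t_idx)
    ys_par = (weights * (b * x)[None, :, :]).sum(axis=1)

    print(np.allclose(ys_rec, ys_par))    # True: same layer, two evaluation modes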


The biggest thing I took away from the Transformers paper (Attention Is All You Need) is how the "attention head" vectors are wired up in such a way as to allow words to be "understood" in the proper context. In other words, "see spot run" and "run a computer program" give dramatically different but specific contexts for the word "run".

It was also my understanding that without those attention heads even the scaling up to current parameter sizes we have today would not have ended up with the level of emergent intelligence that shocked the world with GPT 3.5. We needed both very large models and words put into semantic context in semantic space.
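(For reference, this is roughly all a single self-attention head does: each token's new representation is a softmax-weighted mix of the other tokens, which is how "run" ends up with a sentence-dependent vector. Bare-bones numpy sketch; real models add masking, multiple heads, and output projections:)

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # each row of the output is a context-weighted mixture of all value vectors
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)          # softmax over the sequence
        return w @ V, w

    rng = np.random.default_rng(0)
    d = 8
    X = rng.normal(size=(3, d))                     # say, embeddings for "see", "spot", "run"
    Wq, Wk, Wv = (0.3 * rng.normal(size=(d, d)) for _ in range(3))
    out, attn = self_attention(X, Wq, Wk, Wv)
    print(attn.round(2))                            # row i: how much token i attends to each token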


Attention heads existed before Transformers; they were used in recurrent neural networks (RNNs) to improve their performance. The paper is called “Attention is all you need” because transformers keep the attention heads while discarding the RNN part entirely.

Getting rid of RNNs vastly improved training scalability and allowed big players to start training enormous models on even more enormous training sets in ways that weren't possible with an RNN, AFAIK.


When discussing "Attention Heads" in the context of the Transformers Paper, there's no need to put the word "Self" in front of it, as in "Self-Attention". That's the context in which I used the word Attention above. Something similar to self-attention had pre-existed this paper, but not actual self-attention.

You're right that getting rid of "Recurrence" was another innovation, but removing it was probably more of a hack to make things parallelizable than something that was architecturally justifiable from first principles (like self-attention is), because there's definite "power" in Recurrence (making it desirable); it's just too costly to run in LLMs because of CPU cycles.


> removing it was probably more of a hack to make things parallelizable

But that's the entire point of it. Transformer-based LLMs are “more intelligent” just because you can make them bigger and train them on bigger datasets, thanks to this parallelization.


It's not just about size. Self-Attention is every bit as important as large size, because if we had the current large size but without Self-Attention, we wouldn't have the emergent intelligence. Also, "size" isn't even a new innovation; Self-Attention was a new innovation.

This doesn't match the common knowledge on the topic, which is that model size is more important than the architecture. And training-data size is even more important, which is why single-digit-billion-parameter models are stronger than hundred-billion-parameter ones from several years earlier, when “Chinchilla-optimal training” was in fashion.

SSMs are literally proof that all that really matters is training scalability.

The Universal approximation theorem doesn't care about the architecture after all.


If you parse my words a bit more carefully, you'll realize to test my claim there's a simple thought experiment (or real experiment) you can do which is this:

Take our "current large size" (my words from last post) LLMs, as they are currently today, and then simply remove the Self-Attention wiring, and see if that destroys the emergent intelligence aspect or not. I claim it would. But at the same time this doesn't mean you can just stick Self-Attention onto a small model and expect intelligence to once again emerge.


You are wildly overestimating the “emergent capabilities” of current models, and underestimating alternative architectures' (namely SSMs') performance at the same size.

Also, performance of the modern “small” models show that your last sentence isn't really true either.


> wildly overestimating the “emergent capabilities”

How could I be "overestimating" the emergent capabilities when I never even quantified those capabilities other than to call them "emergent" and impressive?

> “small” models show that your last sentence isn't true either.

I never said that even a perfect architecture would make small models "intelligent". However to the extent that even smaller LLMs can exhibit surprising capabilities, that's more evidence IN FAVOR OF everything I've said, not evidence against.

EDIT: But in that last sentence (of prior reply) by "small" what I meant was genuinely small, meaning non-LLM, and you seem to have interpreted it as "a smaller LLM"


Even 1B-parameter models show “impressive capabilities” to anyone not accustomed to the current state of the art. And there are plenty of relatively small models that perform as well as ChatGPT 3.5 did when it was first released and felt like magic.

“All” that was needed to get there was “just” feeding it more data. The fact that we were actually able to train billion-parameter models on multiple trillions of tokens is the key property of transformers; there's no magic beyond that (it's already cool enough, though): it's not so much that they are more intelligent, it's simply that with them we can brute-force in a scalable fashion.


Yes even the original Transformers model had only millions of parameters and nonetheless showed "impressive capabilities" because it also had Self-Attention.

If you know of any models that have had success (even at the GPT-2 level) without Self-Attention, I'd be interested to know what they are, because I don't know of any.


RWKV.

There aren't many multi-billion-parameter non-transformer models because of path dependence, but that doesn't mean that only transformers can achieve this kind of result.


My statements (which you disagreed with, without exception) haven't been about Transformers vs. non-Transformers. Everything above has been about the importance of the Self-Attention part of it. We could remove Self-Attention from Transformers and still have a functional (but dumb) NN, and that was my point.

Your position was that the Self-Attention is a less important part (because UAT, yadda yadda), and my position was that it's the key ingredient. Every statement above that I made, that you called wrong, was correct. lol.


You are moving the goalpost. The discussion has always been about transformers vs non transformers.

You claimed that self attention was needed to achieve the level of intelligence that we've seen with GPT 3.5:

> without those attention heads even the scaling up to current parameter sizes we have today would not have ended up with the level of emergent intelligence that shocked the world with GPT 3.5. (Verbatim quote from you https://news.ycombinator.com/item?id=41986010)

This is the claim I've been disputing, by responding that the key feature of the intelligence of transformer models comes from their scalability. And now that we have alternatives that scale equally well (SSMs and RWKV), unsurprisingly we see them achieve the same level of reasoning ability.

> Every statement above that I made, that you called wrong, was correct. lol.

Well, except the one quoted above at least…


In the quote you're calling wrong (41986010), you're interpreting "scaling up" as "scaling up, including changing architecture". Scaling up transformers just means scaling up transformers and keeping everything else the same. In other words, you're interpreting "parameter size" as "parameter size, independent of architecture", whereas I meant the parameter size of a Transformer (in the context of with vs. without Self-Attention).

There's a whole lotta certainty about even intractable integrals which is lacking in the case of neural nets grappling with noisy incomplete real world data.

There are at least 100 different equally likely interpretations of that particular sequence of words you just wrote.

> We show that SSMs with local self-attention, a form of input-dependent input processing, can perform in-context learning analogously to transformers, i.e. through gradient descent steps on an implicit linear regression problem.

I don't understand. The benefit of SSMs is better scalability than self-attention. Now this adds self-attention back?


It adds a very local sliding-window attention; the context is only 3 adjacent frames per step. They need access to adjacent frames to show the implicit-model gradient computation, but I haven't yet followed the derivation for why this is so.
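(My reading of "only 3 adjacent frames" rendered as a mask, purely to illustrate why this doesn't give back full quadratic attention; the exact window shape is an assumption on my part, not taken from the paper:)

    import numpy as np

    T = 8
    idx = np.arange(T)
    # position t may only attend to frames t-2, t-1, t: a 3-frame causal window
    mask = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < 3)
    print(mask.astype(int))
    # each row has at most three 1s, so per-step cost stays constant
    # instead of growing with the full context length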

These papers don't explain how pretrained LLMs learn in-context, because the simplified models in these papers are either pretrained for the same task that's tested in-context, or the weights are hand-picked by humans to do GD at inference time.

See this video for a good discussion: https://youtu.be/-yo2672UikU


>Our key insight is that the diagonal linear recurrent layer can act as a gradient accumulator

So they're sort of reinventing the discrete-time differentiator from signal processing, but parameterized neurally?
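(Loose analogy only: with a transition coefficient of 1, a diagonal linear recurrence is a plain accumulator, i.e. a discrete-time integrator, and first-differencing is its inverse. A throwaway numpy sketch, not the paper's construction:)

    import numpy as np

    x = np.random.default_rng(0).normal(size=10)   # stand-in for per-step contributions

    # diagonal linear recurrence h_t = a * h_{t-1} + x_t with a = 1 is a running sum
    h, acc = np.zeros(len(x)), 0.0
    for t, xt in enumerate(x):
        acc = 1.0 * acc + xt                       # the "accumulator" reading
        h[t] = acc

    # a discrete-time differentiator (first difference) undoes the accumulation
    print(np.allclose(np.diff(h, prepend=0.0), x)) # True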


Converging slowly on Kalman filters, calling it now.

I'd love to see SSMs replace transformers but adapting them to non-causal, 2D+ inputs doesn't seem that straightforward.

Is there a non-autoregressive future?


So, I'm just a layman when it comes to AI/ML, but I do understand computability — what's possible to do with a given machine, and how we can build higher-computational-power primitives out of lower-computational-power primitives by plugging those primitives together with "glue" like parallel feed-forward chains (e.g. an ALU adder's carry bits) and loops over static sub-states of execution.

My own mental model for what Transformers must necessarily be doing, in order to be able to compute what they compute, given:

1. the primitives they're made of (for Transformers: matmul a learned matrix; vector-add a learned bias vector; normalize; softmax)

2. what those primitives can compute over a single layer

3. the low-ish total number of layers in a Transformer model

...is that they were already effectively "state space models" in practice. So this doesn't really surprise me!

(To be explicit, my assertion is that, for a given latent space between layers N and N+1 in a Transformer model, that latent space encodes a set of state variables [think CPU registers] used by the Nth serial computation steps of an arbitrary set of learned algorithms — where these algorithms are limited to those where every computation step is possible to encode in the form of a fused-matmul-plus-vadd, such that the algorithm itself can be learned as a depthwise-extruded sequence of weights across the layers; and where the learned algorithms can and do share state variables, both as inputs and as outputs; and where these state variables are all attenuated by an activation probability [in a Transformer: attention] such that the algorithms' outputs form a pre-multiplied conditional probability of the output given the confidence of the inputs — in turn such that the same state variable can be a low-confidence output for one algorithm, and a high-confidence output for another algorithm, and the high-confidence component of the output will swamp the low-confidence output.)


Your intuition is, I think, pretty close to accurate. See this paper from earlier this year:

> While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.

https://arxiv.org/abs/2405.21060


Deep state-space models (Deep SSMs) have shown capabilities for in-context learning on autoregressive tasks, similar to transformers. However, the architectural requirements and mechanisms enabling this in recurrent networks remain unclear. This study demonstrates that state-space model architectures can perform gradient-based learning and use it for in-context learning.


