Hacker News | famouswaffles's comments

Science often advances by accumulation, and it’s true that multiple people frequently converge on similar ideas once the surrounding toolkit exists. But “it becomes obvious” is doing a lot of work here, and the history around relativity (special and general) is a pretty good demonstration that it often doesn’t become obvious at all, even to very smart people with front-row seats.

Take Michelson in 1894: after doing (and inspiring) the kind of precision work that should have set off alarm bells, he’s still talking like the fundamentals are basically done and progress is just “sixth decimal place” refinement.

"While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals." - Michelson 1894

The Michelson-Morley experiments weren't obscure; they were famous, discussed widely, and their null result was well-known. Yet for nearly two decades, the greatest physicists of the era proposed increasingly baroque modifications to existing theory rather than question the foundational assumption of absolute time. These weren't failures of data availability or technical skill; they were failures of imagination constrained by what seemed obviously true about the nature of time itself.

Einstein's insight wasn't just "connecting dots" here, it was recognizing that a dot everyone thought was fixed (the absoluteness of simultaneity) could be moved, and that doing so made everything else fall into place.

People scorn the 'Great Man Hypothesis' so much they sometimes swing too much in the other direction. The 'multiple discovery' pattern you cite is real but often overstated. For Special Relativity, Poincaré came close, but didn't make the full conceptual break. Lorentz had the mathematics but retained the aether. The gap between 'almost there' and 'there' can be enormous when it requires abandoning what seems like common sense itself.


>Unfortunately, none of that has anything to do with what LLMs are doing. The LLM is not thinking about concepts and then translating that into language. It is imitating what it looks like to read people doing so and nothing more.

'Language' is only the initial and final layers of a Large Language Model. Manipulating concepts is exactly what they do, and it's unfortunate the most obstinate seem to be the most ignorant.


They do not manipulate concepts. There is no representation of a concept for them to manipulate.

It may, however, turn out that in doing what they do, they are effectively manipulating concepts, and this is what I was alluding to: by building the model, even though your approach was through tokenization and whatever term you want to use for the network, you end up accidentally building something that implicitly manipulates concepts. Moreover, it might turn out that we ourselves do more of this than we perhaps like to think.

Nevertheless "manipulating concepts is exactly what they do" seems almost willfully ignorant of how these systems work, unless you believe that "find the next most probable sequence of tokens of some length" is all there is to "manipulating concepts".


>They do not manipulate concepts. There is no representation of a concept for them to manipulate.

Yes, they do. And of course there is. And there's plenty of research on the matter.

>It may, however, turn out that in doing what they do, they are effectively manipulating concepts

There is no effectively here. Text is what goes in and what comes out, but it's by no means what they manipulate internally.

>Nevertheless "manipulating concepts is exactly what they do" seems almost willfully ignorant of how these systems work, unless you believe that "find the next most probable sequence of tokens of some length" is all there is to "manipulating concepts".

"Find the next probable token" is the goal, not the process. It is what models are tasked to do yes, but it says nothing about what they do internally to achieve it.


Please pass on a link to a solid research paper that supports the idea that to "find the next probable token", LLMs manipulate concepts ... just one will do.

Revealing emergent human-like conceptual representations from language prediction - https://www.pnas.org/doi/10.1073/pnas.2512514122

Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task - https://openreview.net/forum?id=DeG07_TcZvT

On the Biology of a Large Language Model - https://transformer-circuits.pub/2025/attribution-graphs/bio...

Emergent Introspective Awareness in Large Language Models - https://transformer-circuits.pub/2025/introspection/index.ht...


Thanks for that. I've read the two Lindsey papers before. I think these are all interesting, but they are also what used to be called "just-so stories". That is, they describe a way of understanding what the LLM is doing, but do not actually describe what the LLM is doing.

And this is OK and still quite interesting - we do it to ourselves all the time. Often it's the only way we have of understanding the world (or ourselves).

However, in the case of LLMs, which are tools that we have created from scratch, I think we can require a higher standard.

I don't personally think that any of these papers suggest that LLMs manipulate concepts. They do suggest that the internal representation after training is highly complex (superposition, in particular), and that when inputs are presented, it isn't unreasonable to talk about the observable behavior as if it involved represented concepts. It is a useful stance to take, similar to Dennett's intentional stance.

However, while this may turn out to be how a lot of human cognition works, I don't think it is the significant part of what is happening when we actively reason. Nor do I think it corresponds to what most people mean by "manipulate concepts".

The LLM, despite the presence of "features" that may correspond to human concepts, is relentlessly forward-driving: given these inputs, what is my output? Look at the description in the 3rd paper of the arithmetic example. This is not "manipulating concepts" - it's a trick that often gets to the right answer (just like many human tricks used for arithmetic, only somewhat less reliable). It is extremely different, however, from "rigorous" arithmetic - the stuff you learned when you were somewhere between ages 5 and 12, perhaps - that always gives the right answer and involves no pattern matching, no inference, no approximations. The same thing can be said, I think, about every other example in all 4 papers, to some degree or another.

What I do think is true (and very interesting) is that it seems somewhere between possible and likely that a lot more human cognition than we've previously suspected uses similar mechanisms as these papers are uncovering/describing.


>That is, they describe a way of understanding what the LLM is doing, but do not actually describe what the LLM is doing.

I’m not sure what distinction you’re drawing here. A lot of mechanistic interpretability work is explicitly trying to describe what the model is doing in the most literal sense we have access to: identifying internal features/circuits and showing that intervening on them predictably changes behavior. That’s not “as-if” gloss; it’s a causal claim about internals.

If your standard is higher than “we can locate internal variables that track X and show they causally affect outputs in X-consistent ways,” what would count as “actually describing what it’s doing”?
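As a toy illustration of that probe-and-intervene recipe (entirely my construction: a real study probes actual transformer activations, not synthetic vectors, and the data here is deliberately easy):

```python
# Toy sketch of probe-and-intervene, not from any cited paper: hidden
# states carry a "concept" direction; a linear probe locates it, and
# steering a state along that direction flips a downstream readout.
import numpy as np

rng = np.random.default_rng(0)
d = 16
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)

# Synthetic "hidden states": small noise plus +/- the concept direction.
labels = rng.integers(0, 2, size=200)
H = rng.normal(scale=0.1, size=(200, d)) + np.outer(2 * labels - 1, concept)

# "Probe": least-squares linear readout of the concept from hidden states.
w, *_ = np.linalg.lstsq(H, 2 * labels - 1, rcond=None)
acc = np.mean((H @ w > 0) == labels)
print(f"probe accuracy: {acc:.2f}")

# "Intervention": pushing a state against the probe direction flips the
# readout - the causal flavor of claim interpretability work makes.
h = H[labels == 1][0]
steered = h - 3 * (h @ w) * w / (w @ w)
print(h @ w > 0, steered @ w > 0)  # True False
```

The point of the sketch is only the two-step logic being argued about: locate an internal variable that tracks X, then show that intervening on it changes behavior in X-consistent ways.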

>However, in the case of LLMs, which are tools that we have created from scratch, I think we can require a higher standard.

This is backwards. We don’t “create them from scratch” in the sense relevant to interpretability. We specify an architecture template and a training objective, then we let gradient descent discover a huge, distributed program. The “program” is not something we wrote or understand. In that sense, we’re in a similar epistemic position as neuroscience: we can observe behavior, probe internals, and build causal/mechanistic models, without having full transparency.

So what does “higher standard” mean here, concretely? If you mean “we should be able to fully enumerate a clean symbolic algorithm,” that’s not a standard we can meet even for many human cognitive skills, and it’s not obvious why that should be the bar for “concept manipulation.”

>I don't personally think that any of these papers suggest that LLMs manipulate concepts. They do suggest that the internal representation after training is highly complex (superposition, in particular), and that when inputs are presented, it isn't unreasonable to talk about the observable behavior as if it involved represented concepts. It is useful stance to take, similar to Dennett's intentional stance.

You start with “there is no representation of a concept,” but then concede “features that may correspond to human concepts.” If those features are (a) reliably present across contexts, (b) abstract over surface tokens, and (c) participate causally in producing downstream behavior, then that is a representation in the sense most people mean in cognitive science. One of the most frustrating things about these sorts of discussions is the meaningless semantic games and goalpost shifting.

>The LLM, despite the presence of "features" that may correspond to human concepts, is relentlessly forward-driving: given these inputs, what is my output?

Again, that’s a description of the objective, not the internal computation. The fact that the training loss is next-token prediction doesn’t imply the internal machinery is only “token-ish.” Models can and do learn latent structure that’s useful for prediction: compressed variables, abstractions, world regularities, etc. Saying “it’s just next-token prediction” is like saying “humans are just maximizing inclusive genetic fitness,” therefore no real concepts. Goal ≠ mechanism.

> Look at the description in the 3rd paper of the arithmetic example. This is not "manipulating concepts" - it's a trick that often gets to the right answer

Two issues:

1. “Heuristic / approximate” doesn’t mean “not conceptual.” Humans use heuristics constantly, including in arithmetic. Concept manipulation doesn’t require perfect guarantees; it requires that internal variables encode and transform abstractions in ways that generalize.

2. Even if a model is using a “trick,” it can still be doing so by operating over internal representations that correspond to quantities, relations, carry-like states, etc. “Not a clean grade-school algorithm” is not the same as “no concepts.”

>Rigorous arithmetic… always gives the right answer and involves no pattern matching, no inference…

“Rigorous arithmetic” is a great example of a reliable procedure, but reliability doesn’t define “concept manipulation.” It’s perfectly possible to manipulate concepts using approximate, distributed representations, and it’s also possible to follow a rigid procedure with near-zero understanding (e.g., executing steps mechanically without grasping place value).

So if the claim is “LLMs don’t manipulate concepts because they don’t implement the grade-school algorithm,” that’s just conflating one particular human-taught algorithm with the broader notion of representing and transforming abstractions.


> You start with “there is no representation of a concept,” but then concede “features that may correspond to human concepts.” If those features are (a) reliably present across contexts, (b) abstract over surface tokens, and (c) participate causally in producing downstream behavior, then that is a representation in the sense most people mean in cognitive science. One of the most frustrating things about these sorts of discussions is the meaningless semantic games and goalpost shifting.

I'll see if I can try to explain what I mean here, because I absolutely don't believe this is shifting the goal posts.

There are a couple of levels of human cognition that are particularly interesting in this context. One is the question of just how the brain does anything at all, whether that's homeostasis, neuromuscular control or speech generation. Another is how humans engage in conscious, reasoned thought that leads to (or appears to lead to) novel concepts. The first one is a huge area, better understood than the second though still characterized more by what we don't know than what we do. Nevertheless, it is there that the most obvious parallels with e.g. the Lindsey papers can be found. Neural networks, activation networks and waves, signalling etc. etc. The brain receives (lots of) inputs, generates responses including but not limited to speech generation. It seems entirely reasonable to suggest that maybe our brains, given a somewhat analogous architecture at some physical level to the one used for LLMs, might use similar mechanisms as the latter.

However, nobody would say that most of what the brain does involves manipulating concepts. When you run from danger, when you reach up to grab something from a shelf, when you do almost anything except actual conscious reasoning, most of the accounts of how that behavior arises from brain activity do not involve manipulating concepts. Instead, we have explanations more similar to those being offered for LLMs - linked patterns of activations across time and space.

Nobody serious is going to argue that conscious reasoning is not built on the same substrate as unconscious behavior, but I think that most people tend to feel that it doesn't make sense to try to shoehorn it into the same category. Just as it doesn't make much sense to talk about what a text editor is doing in terms of P and N semiconductor gates, or even just logic circuits, it doesn't make much sense to talk about conscious reasoning in terms of patterns of neuronal activation, despite the fact that in both cases, one set of behavior is absolutely predicated on the other.

My claim/belief is that there is nothing inside an LLM that corresponds even a tiny bit to what happens when you are asked "What is 297 x 1345?" or "will the moon be visible at 8pm tonight?" or "how does writer X tackle subject Y differently than writer Z?". They can produce answers, certainly. Sometimes the answers even make significant sense or better. But when they do, we have an understanding of how that is happening that does not require any sense of the LLM engaging in reasoning or manipulating concepts. And because of that, I consider attempts like Lindsey's to justify the idea that LLMs are manipulating concepts to be misplaced - the structures Lindsey et al. are describing are much more similar to the ones that let you navigate, move, touch, lift without much if any conscious thought. They are not, I believe, similar to what is going on in the brain when you are asked "do you think this poem would have been better if it was a haiku?" and whatever that thing is, that is what I mean by manipulating concepts.

> Saying “it’s just next-token prediction” is like saying “humans are just maximizing inclusive genetic fitness,” therefore no real concepts. Goal ≠ mechanism.

No. There's a huge difference between behavior and design. Humans are likely just maximizing genetic fitness (even though that's really a concept, but that detail is not worth arguing about here), but that describes, as you note, a goal not a mechanism. Along the way, they manifest huge numbers of sub-goal directed behaviors (or, one could argue quite convincingly, goal-agnostic behaviors) that are, broadly speaking, not governed by the top level goal. LLMs don't do this. If you want to posit that the inner mechanisms contain all sorts of "behavior" that isn't directly linked to the externally visible behavior, be my guest, but I just don't see this as equivalent. What humans visibly, mechanistically do covers a huge range of things; LLMs do token prediction.


>Nobody would say that most of what the brain does involves manipulating concepts. When you run from danger, when you reach up to grab something from a shelf, when you do almost anything except actual conscious reasoning, most of the accounts of how that behavior arises from brain activity do not involve manipulating concepts.

This framing assumes "concept manipulation" requires conscious, deliberate reasoning. But that's not how cognitive science typically uses the term. When you reach for a shelf, your brain absolutely manipulates concepts - spatial relationships, object permanence, distance estimation, tool affordances. These are abstract representations that generalize across contexts. The fact that they're unconscious doesn't make them less conceptual.

>My claim/belief is that there is nothing inside an LLM that corresponds even a tiny bit to what happens when you are asked "What is 297 x 1345?" or "will the moon be visible at 8pm tonight?"

This is precisely what the mechanistic interpretability work challenges. When you ask "will the moon be visible tonight," the model demonstrably activates internal features corresponding to: time, celestial mechanics, geographic location, lunar phases, etc. It combines these representations to generate an answer.

>But when they do, we have an understanding of how that is happening that does not require any sense of the LLM engaging in reasoning or manipulating concepts.

Do we? The whole point of the interpretability research is that we don't have a complete understanding. We're discovering that these models build rich internal world models, causal representations, and abstract features that weren't explicitly programmed. If your claim is "we can in principle reduce it to matrix multiplications," sure, but we can in principle reduce human cognition to neuronal firing patterns too.

>They are not, I believe, similar to what is going on in the brain when you are asked "do you think this poem would have been better if it was a haiku?" and whatever that thing is, that is what I mean by manipulating concepts.

Here's my core objection: you're defining "manipulating concepts" as "whatever special thing happens during conscious human reasoning that feels different from 'pattern matching.'" But this is circular and unfalsifiable. How would we ever know if an LLM (or another human, for that matter) is doing this "special thing"? You've defined it purely in terms of subjective experience rather than functional or mechanistic criteria.

>Humans are likely just maximizing genetic fitness... but that describes, as you note, a goal not a mechanism. Along the way, they manifest huge numbers of sub-goal directed behaviors... that are, broadly speaking, not governed by the top level goal. LLMs don't do this.

LLMs absolutely do this, it's exactly what the interpretability research reveals. LLMs trained on "token prediction" develop huge numbers of sub-goal directed internal behaviors (spatial reasoning, causal modeling, logical inference) that are instrumentally useful but not explicitly specified, precisely the phenomenon you claim only humans exhibit. And 'token prediction' is not about text. The most significant advances in robotics in decades are off the back of LLM transformers. 'Token prediction' is just the goal, and I'm tired of saying this for the thousandth time.

https://www.skild.ai/blogs/omni-bodied


HN comment threads are really not the right place for discussions like this.

> Here's my core objection: you're defining "manipulating concepts" as "whatever special thing happens during conscious human reasoning that feels different from 'pattern matching.'" But this is circular and unfalsifiable. How would we ever know if an LLM (or another human, for that matter) is doing this "special thing"? You've defined it purely in terms of subjective experience rather than functional or mechanistic criteria.

I think your core objection is well aligned to my own POV. I am not claiming that the subjective experience is the critical element here, but I am claiming that whatever is going on when we have the subjective experience of "reasoning" is likely to be different (or more specifically, more usefully described in different ways) than what is happening in LLMs and our minds when doing something else.

How would we ever know? Well the obvious answer is more research into what is happening in human brains when we reason and comparing that to brain behavior at other times.

I don't think it's likely to be productive to continue this exchange on HN, but if you would like to continue, my email address is in my profile.


@PaulDavisThe1st I'd love to hear your take on these papers.

Provided above.


The Universe (which others call the Golden Gate Bridge), is composed of an indefinite and perhaps infinite series of spans.

Right, what they needed was billions of years of brute force and trial and error.

His point is that we can't train a Gemini 3/Claude 4.5 etc model because we don't have the data to match the training scale of those models. There aren't trillions of tokens of digitized pre-1900s text.

>But OpenAI doesn't have tiny COGS: inference is expensive as fuck.

No, inference is really cheap today, and people saying otherwise simply have no idea what they are talking about. Inference is not expensive.


Clearly not cheap enough.

> Even at $200 a month for ChatGPT Pro, the service is struggling to turn a profit, OpenAI CEO Sam Altman lamented on the platform formerly known as Twitter Sunday. "Insane thing: We are currently losing money on OpenAI Pro subscriptions!" he wrote in a post. The problem? Well according to @Sama, "people use it much more than we expected."

https://www.theregister.com/2025/01/06/altman_gpt_profits/


So just raise the price or decrease the cost per token internally.

Altman also said 4 months ago:

  Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
https://simonwillison.net/2025/Aug/17/sam-altman/


>Gemini is catching up to chatpt from a MAU perspective

It is far behind, and GPT hasn't exactly stopped growing either. Weekly active users, monthly visits... Gemini is nowhere near. They're comfortably second, but second is still well below first.

>ai overviews in search are super popular and staggeringly more used than any other ai-based product out there

Is it? How would you even know? It's a forced feature you cannot opt out of or choose not to use. I ignore AI overviews, but would still count as a 'user' to you.


we're curious what your source is



ChatGPT - 5.8b visits - -5.2% MoM

Gemini - 1.4b visits - +14.4% MoM

Yeah, ChatGPT is still more popular, but this does not show Gemini struggling exactly.


doesn't seem accurate since gemini has many entry points


Entry points? The visits are accurate for the website and app. If you're talking about AI overviews, then that's meaningless for reasons I've already explained.


I do understand why it makes it very hard to compare but it's certainly not meaningless. Google's AI overviews are pretty much the only way that I use AI.


I mean, we're all talking about how Google is 'catching up' to and 'taking over' OpenAI, right? In that case, it genuinely is meaningless. AI Overviews, even if it had the usage OP assumes, is not a threat to OpenAI or ChatGPT. People use ChatGPT for a lot of different things, and AI Overviews only handles (rather poorly, in my opinion) a small, limited part of what it gets used for. I use AI Mode a lot. It's better than Overviews in every conceivable way, and it's still not a ChatGPT replacement.

https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...


I guess I use Google's AI that's built into their search engine the same way I've used ChatGPT or Claude. I ask it a question and it provides me an answer. I can then interact with it further if I want or look through the rest of the search results.


I use Google's AI much more often than OpenAI's, but the URL I use is usually google.co.uk and so wouldn't appear in your numbers.


LLMs are not Markov Chains unless you contort the meaning of a Markov Model State so much you could even include the human brain.


Not sure why that's contorting; a Markov model is anything where you know the probability of going from state A to state B. The state can be anything. When it's text generation, the state is the previous text, and the transition takes it to that text plus one extra character, which is true for both LLMs and old-school n-gram Markov models.


A GPT model would be modelled as an n-gram Markov model where n is the size of the context window. This is slightly useful for getting some crude bounds on the behaviour of GPT models in general, but is not a very efficient way to store a GPT model.


I'm not saying it's an n-gram Markov model or that you should store them as a lookup table. Markov models are just a mathematical concept that don't say anything about storage, just that the state change probabilities are a pure function of the current state.


You say state can be anything, no restrictions at all. Let me sell you a perfect predictor then :) The state is the next token.


Yes, technically you can frame an LLM as a Markov chain by defining the "state" as the entire sequence of previous tokens. But this is a vacuous observation under that definition, literally any deterministic or stochastic process becomes a Markov chain if you make the state space flexible enough. A chess game is a "Markov chain" if the state includes the full board position and move history. The weather is a "Markov chain" if the state includes all relevant atmospheric variables.

The problem is that this definition strips away what makes Markov models useful and interesting as a modeling framework. A "Markov text model" is a low-order Markov model (e.g., n-grams) with a fixed, tractable state and transitions based only on the last k tokens. LLMs aren't that: they condition on un-fixed, long-range context (up to the window). For Markov chains, k is non-negotiable. It's a constant, not a variable. Once you make it a variable, nearly any process can be described as Markovian, and the word is useless.
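The fixed-order model being contrasted here can be sketched in a few lines (my illustration, not anyone's code from the thread; the corpus and k are arbitrary):

```python
# Minimal sketch of a fixed-order (k-gram) Markov text model - my toy
# illustration. The defining constraint: the next-token distribution is
# a pure function of the last k tokens, stored in a literal lookup table.
import random
from collections import defaultdict, Counter

def train_markov(tokens, k=2):
    """Count transitions from each k-token context to the next token."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - k):
        state = tuple(tokens[i:i + k])  # the ONLY thing the model conditions on
        table[state][tokens[i + k]] += 1
    return table

def sample_next(table, state, rng=None):
    """Sample a next token; it depends only on the current k-token state."""
    rng = rng or random.Random(0)
    counts = table[tuple(state)]
    toks, weights = zip(*counts.items())
    return rng.choices(toks, weights=weights)[0]

corpus = "the cat sat on the mat and the cat ran".split()
table = train_markov(corpus, k=2)
print(sample_next(table, ("the", "cat")))  # "sat" or "ran", nothing else
```

The contrast being argued: this table literally cannot condition on anything outside the fixed k-token state, whereas a transformer can attend to any token within its window depending on content.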


Sure many things can be modelled as Markov chains, which is why they're useful. But it's a mathematical model so there's no bound on how big the state is allowed to be. The only requirement is that all you need is the current state to determine the probabilities of the next state, which is exactly how LLMs work. They don't remember anything beyond the last thing they generated. They just have big context windows.


The defining feature of the "Markov property" is that the next state depends only on the current state, not on history.

And in classes, the very first trick you learn to skirt around history is to add Boolean variables to your "memory state". Your states now model "did it rain the previous N days?" The issue, obviously, is that this is exponential if you're not careful. Maybe you can get clever by just making your state a "sliding window history"; then it's linear in the number of days you remember. Maybe mix both. Maybe add even more information. Tradeoffs, tradeoffs.

I don't think LLMs embody the markov property at all, even if you can make everything eventually follow the markov property by just "considering every single possible state". Of which there are (size of token set)^(length) states at minimum because of the KV cache.


The KV cache doesn't affect it because it's just an optimization. LLMs are stateless and don't take any other input than a fixed block of text. They don't have memory, which is the requirement for a Markov chain.


Have you ever actually worked with a basic markov problem?

The Markov property states that the transition probabilities to the next state depend entirely on the current state.

These states inhabit a state space. The way you encode "memory" if you need it - say you need to remember whether it rained on each of the last 3 days - is by expanding said state space. In that case, you'd go from 1 state to 2^3 states if you needed the precise binary information for each day. Being "clever", if you assume only the number of days it rained in the past 3 days matters, you can get away with a 'linear' number of states.
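The rain bookkeeping can be sketched concretely (my toy construction; the 0.3 rain probability is an arbitrary assumption):

```python
# Toy sketch of encoding "memory" inside a Markov state - my construction,
# with an assumed rain probability. History is folded into the state, so
# transitions remain a pure function of the current state.
from itertools import product

P_RAIN = 0.3  # assumed chance of rain on any given day (toy number)

# A state is a tuple like (True, False, True): rain on each of the last 3 days.
states = list(product([False, True], repeat=3))
assert len(states) == 2 ** 3  # exponential in the remembered window

def transitions(state):
    """Slide the window: drop the oldest day, append tomorrow's outcome."""
    window = state[1:]
    return {window + (True,): P_RAIN, window + (False,): 1 - P_RAIN}

# Two successor states per state; their probabilities sum to 1.
print(transitions((True, False, True)))
```

Scaling the same trick to a state space of (token set size)^(context length) is where, per the argument here, the framing stops doing any useful work.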

Sure, an LLM is a "Markov chain" of state space size (# tokens)^(context length), at minimum. That's not a helpful abstraction and defeats the original purpose of the Markov observation. The entire point of the Markov observation is that you can represent a seemingly huge predictive model with just a couple of variables in a discrete state space, and ideally you're the clever programmer/researcher and can significantly collapse said space by being, well, clever.

Are you deliberately missing the point or what?


> Sure, an LLM is a "Markov chain" of state space size (# tokens)^(context length), at minimum.

Okay, so we're agreed.


>Sure many things can be modelled as Markov chains

Again, no they can't, unless you break the definition. K is not a variable. It's as simple as that. The state cannot be flexible.

1. The markov text model uses k tokens, not k tokens sometimes, n tokens other times and whatever you want it to be the rest of the time.

2. A Markov model is explicitly described as 'assuming that future states depend only on the current state, not on the events that occurred before it'. Defining your 'state' such that every event imaginable can be captured inside it is a 'clever' workaround, but is ultimately describing something that is decidedly not a Markov model.


It's not n tokens sometimes, k tokens some other times. LLMs have fixed context windows; you just sometimes have less text, so the window isn't full. They're pure functions from a fixed-size block of text to a probability distribution over the next token, same as the classic lookup-table n-gram Markov chain model.


1. A context limit is not a Markov order. An n-gram model’s defining constraint is: there exists a small constant k such that the next-token distribution depends only on the last k tokens, full stop. You can't use a k-trained markov model on anything but k tokens, and each token has the same relationship with each other regardless. An LLM’s defining behavior is the opposite: within its window it can condition on any earlier token, and which tokens matter can change drastically with the prompt (attention is content-dependent). “Window size = 8k/128k” is not “order k” in the Markov sense; it’s just a hard truncation boundary.

2. “Fixed-size block” is a padding detail, not a modeling assumption. Yes, implementations batch/pad to a maximum length. But the model is fundamentally conditioned on a variable-length prefix (up to the cap), and it treats position 37 differently from position 3,700 because the computation explicitly uses positional information. That means the conditional distribution is not a simple stationary “transition table” the way the n-gram picture suggests.

3. “Same as a lookup table” is exactly the part that breaks. A classic n-gram Markov model is literally a table (or smoothed table) from discrete contexts to next-token probabilities. A transformer is a learned function that computes a representation of the entire prefix and uses that to produce a distribution. Two contexts that were never seen verbatim in training can still yield sensible outputs because the model generalizes via shared parameters; that is categorically unlike n-gram lookup behavior.

I don't know how many times I have to spell this out for you. Calling LLMs Markov chains is less than useless. They don't resemble them in any way unless you understand neither.


I think you're confusing Markov chains and "Markov chain text generators". A Markov chain is a mathematical structure where the probabilities of going to the next state only depend on the current state and not the previous path taken. That's it. It doesn't say anything about whether the probabilities are computed by a transformer or stored in a lookup table, it just exists. How the probabilities are determined in a program doesn't matter mathematically.
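To make the disagreement concrete: under this definition, any sequential sampler becomes "Markov" by taking the state to be the entire prefix emitted so far. A minimal sketch (the `step` and `uniform_dist` names are hypothetical, for illustration only):

```python
import random

# A "Markov chain" whose state space is every possible token prefix:
# the next-state distribution depends only on the current state, but
# the current state happens to encode the entire history.
def step(state, next_token_dist):
    """state: tuple of tokens; returns the successor state."""
    tokens, probs = zip(*next_token_dist(state).items())
    nxt = random.choices(tokens, weights=probs)[0]
    return state + (nxt,)  # new state = old state plus one token

# ANY next-token model fits this interface -- which is exactly the
# point of contention: the Markov property becomes vacuous once the
# state is allowed to be the whole prefix.
def uniform_dist(state):
    return {"a": 0.5, "b": 0.5}

s = ()
for _ in range(5):
    s = step(s, uniform_dist)
print(s)  # e.g. ('a', 'b', 'b', 'a', 'a')
```

Whether this construction is a legitimate Markov chain or a trivializing redefinition is precisely what the two sides here disagree about.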


Just a heads-up: this is not the first time somebody has to explain Markov chains to famouswaffles on HN, and I'm pretty sure it won't be the last. Engaging further might not be worth it.


I did not even remember you and had to dig to find out what you were on about. Just a heads up, if you've had a previous argument and you want to bring that up later then just speak plainly. Why act like "somebody" is anyone but you?

My response to both of you is the same.

LLMs do depend on previous events, but you say they don't because you've redefined state to include previous events. It's a circular argument. In a Markov chain, state is well defined, not something you can insert any property you want to or redefine as you wish.

It's not my fault neither of you understand what the Markov property is.


By that definition n-gram Markov chain text generators also include previous state because you always put the last n grams. :) It's exactly the same situation as LLMs, just with higher, but still fixed n.


We've been through this. The context of an LLM is not fixed. Context windows ≠ n-gram orders.

They don't, because n-gram orders are too small and rigid to include the history in the general case.

I think srean's comment up the thread is spot on. This current situation where the state can be anything you want it to be just does not make a productive conversation.


'A Markov chain is a mathematical structure where the probabilities of going to the next state only depend on the current state and not the previous path taken.'

My point, which seems so hard to grasp for whatever reason, is that in a Markov chain, state is a well-defined thing. It's not a variable you can assign any property to.

LLMs do depend on the previous path taken. That's the entire reason they're so useful! And the only reason you say they don't is because you've redefined 'state' to include that previous path! It's nonsense. Can you not see the circular argument?

The state is required to be a fixed, well-defined element of a structured state space. Redefining the state as an arbitrarily large, continuously valued encoding of the entire history is a redefinition that trivializes the Markov property, which a Markov chain should satisfy. Under your definition, any sequential system can be called Markov, which means the term no longer distinguishes anything.


They only have the previous path in as much as n-gram Markov text generators have the previous path.


> An n-gram model’s defining constraint is: there exists a small constant k such that the next-token distribution depends only on the last k tokens, full stop.

I don't necessarily agree with GP, but I also don't think that the Markov chain and Markov generator definitions include the word "small".

That constant can be as large as you need it to be.


Well, LLMs aren't human brains, unless you contort the definition of matrix algebra so much that it could even include them.


QM and GR can be written as matrix algebra, atoms and electrons are QM, chemistry is atoms and electrons, biology is chemistry, brains are biology.

An LLM could be implemented with a Markov chain, but the naïve matrix is ((vocab size)^(context length))^2, which is far too big to fit in this universe.

Like, the Bekenstein bound means that if you tried to write the transition matrix for an LLM with just 4k context (and a 50k vocabulary) at just one bit of resolution, the first row alone (out of a bit more than 10^18795 rows) would require a black hole >10^9800 times larger than the observable universe.
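For what it's worth, the row-count figure checks out under the stated assumptions (50k vocabulary, 4k taken as 4,000 tokens of context: one row per possible context, i.e. vocab^context of them):

```python
import math

# Naive Markov-chain encoding of an LLM: one state per possible
# context, so the transition matrix has vocab**context rows.
vocab, context = 50_000, 4_000
log10_rows = context * math.log10(vocab)
print(int(log10_rows))  # 18795, i.e. a bit more than 10^18795 rows
```

This is only the row count; the Bekenstein-bound comparison to the observable universe is a separate (and stronger) claim from the comment above.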


Yes, sure enough, but brains are not ideas, and there is no empirical or theoretical model for ideas in terms of brain states. The idea of unified science all stemming from a single ultimate cause is beautiful, but it is not how science works in practice, nor is it supported by scientific theories today. Case in point: QM models do not explain the behavior of larger things, and there is no model which gives a method to transform from quantum to massive states.

The case for brain states and ideas is similar to QM and massive objects. While certain metaphysical presuppositions might hold that everything must be physical and describable by models for physical things, science, which should eschew metaphysical assumptions, has not shown that to be the case.


OpenAI sat on GPT-4 for 8 months and even released GPT-3.5 months after GPT-4 was trained. While I don't expect such big lag times anymore, generally it's a given that the public is behind whatever models they have internally at the frontier. By all indications, they did not want to release this yet, and only did so because of Gemini-3-pro.


Curriculum learning is not really a thing for these large SOTA LLM training runs (specifically pre-training). We know it would help, but ordering trillions of tokens of data in this way would be a herculean task.


I've heard things about pre-training optimization. "Soft start" and such. So I struggle to believe that curriculum learning is not a thing on any frontier runs.

Sure, it's a lot of data to sift through, and the time and cost to do so can be substantial. But if you're already planning to funnel all of that through a 1T-parameter LLM, you might as well pass the fragments through a small classifier first.


>A pocket calculator that would give the right numbers 99.7% of the time would be fairly useless.

Well, that's great and all, but the vast majority of LLM use is not for stuff you can solve by plucking out a pocket calculator (or running a similarly airtight deterministic algorithm), so this is a moot point.

People really need to let go of this obsession with a perfect general intelligence that never makes errors. It doesn't exist and never has, outside of fiction.

