He means a net loss relative to the status quo, in reference to the entire fiasco. I had TikTok before… I still have TikTok… what rights were trampled in the process of bringing about zero change to my use of TikTok?
I understand the hype. I think most humans understand why a machine responding to a query like never before in the history of mankind is amazing.
What you're going through is hype overdose. You're numb to it. Like, I can get it if someone disagrees, but it's a next-level lack of understanding of human behavior if you don't get the hype at all.
There exist living human beings, some still children and some with brain damage, with intelligence comparable to an LLM's, and we classify those humans as conscious but we don't with LLMs.
I'm not trying to say LLMs are conscious, just that the creation of LLMs marks a significant turning point. We crossed a barrier two years ago somewhat equivalent to landing on the moon, and I am just dumbfounded that someone doesn't understand why there is hype around this.
The first plane ever flies, and people think "we can fly to the moon soon!".
Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.
And yet, the moon was reached a mere 66 years after the first powered flight. Perhaps it's a better heuristic than you are insinuating...
In all honesty, there are lots of connections between powered flight and space travel. Two obvious ones are "light and strong metallurgy" and "a solid mathematical theory of thermodynamics". Once you can build lightweight and efficient combustion chambers, a lot becomes possible...
Similarly, with LLMs, it's clear we've hit some kind of phase shift in what's possible - we now have enough compute, enough data, and enough know-how to be able to copy human symbolic thought by sheer brute-force. At the same time, through algorithms as "unconnected" as airplanes and spacecraft, computers can now synthesize plausible images, plausible music, plausible human speech, plausible anything you like really. Our capabilities have massively expanded in a short timespan - we have cracked something. Something big, like lightweight combustion chambers.
The status quo ante is useless to predict what will happen next.
> By that metric, there are lots of connections between space flight and any other aspect of modern society.
Indeed. But there's a reason "aerospace" is a word.
> No plane, relying upon air pressure to fly, can ever use that method to get to the moon
No indeed. But if you want to build a moon rocket, the relevant skillsets are found in people who make airplanes. Who built Apollo? Boeing. Grumman. McDonnell Douglas. Lockheed.
I feel like aeronautics and astronautics are deeply connected. Both depend upon aerodynamics, 6-DOF control, and guidance in forward flight. Advancing aviation construction techniques were the basis of rockets, etc.
Sure, rocketry to LEO asks more in strength of materials, and aviation doesn’t require liquid fueled propulsion or being able to control attitude in vacuum.
These aren’t unconnected developments. Space travel grew straight out of aviation and military aviation. Indeed, look at the vertical takeoff aircraft from the 40s and 50s, evolving into missile systems with solid propulsion and then liquid propulsion.
I thought your point about aerospace was terrible. And since you're insisting I follow you further into the analogy, I think it's terrible here too.
LLMs may be a key building block for early AGI. The jury is still out. Will an LLM alone do it? No. You can't build a space vehicle from fins and fairings and control systems alone.
O1 can reach pretty far beyond past LLM capabilities by adding infrastructure for metacognition and goal seeking. Is O1 the pinnacle, or can we go further?
In either case, planes and rocket-planes did a lot to get us to space-- they weren't an unrelated evolutionary dead end.
> Yet powered flight has nothing to do with space travel, no connection at all.
The relationships you are describing are why airflight/spaceflight and AI/AGI are a good comparison.
We will never get AGI from an LLM. We will never fly to the moon via winged flight. These are examples of how one method of doing a thing, will never succeed in another.
Citing all the similarities between airflight and spaceflight makes my point! One may as well discuss how video games are on a computer platform, and LLMs are on a computer platform, and say "It's the same!", as say airflight and spaceflight are the same.
Note how I was very clear, and very specific, and referred to "winged flight" and "low/high pressure", which will never, ever, ever get one even to space. Nor allow anyone to navigate in space. There is no "lift" in space.
Unless you can describe to me how a fixed wing with low/high pressure is used to get to the moon, all the other similarities are inconsequential.
Good grief, people are blathering on about metallurgy. That's not a connection; it's just modern tech, has nothing to do with the method of flying (low/high pressure around the wing), and is used in every industry.
I love how incapable everyone in this thread has been of concept focus, of separating the specific from the generic. It's why people think, generically, that LLMs will result in AGI, too. But they won't. Ever. No amount of compute will generate AGI via LLM methods.
LLMs don't think, they don't reason, they don't infer, they aren't creative, they come up with nothing new, it's easiest to just say "they don't".
One key aspect here is that knowledge has nothing to do with intelligence. A cat is more intelligent than any LLM that will ever exist. A mouse. Correlative fact regurgitation is not what intelligence is, any more than a book on a shelf is intelligence, or the results of Yahoo search 10 years ago were.
The most amusing is when people mistake shuffled up data output from an LLM as "signs of thought".
Your point holds up well enough for spaceflight, despite some quibbling from commenters.
But I haven't seen you make a compelling argument for why it's the same thing with AI/AGI.
In your old analogy, we're all still the guys on the ground saying it'll work. You're saying it won't. But nobody has "been to space" yet. You have no idea if LLMs will take us to AGI.
I personally think they'll be the engine on the spaceship.
> No amount of compute will generate AGI via LLM methods.
> LLMs don't think, they don't reason, they don't infer, they aren't creative, they come up with nothing new, it's easiest to just say "they don't".
> One key aspect here is that knowledge has nothing to do with intelligence. A cat is more intelligent than any LLM that will ever exist. A mouse. Correlative fact regurgitation is not what intelligence is, any more than a book on a shelf is intelligence, or the results of Yahoo search 10 years ago were.
> The most amusing is when people mistake shuffled up data output from an LLM as "signs of thought".
From where I sit, I don't even see LLMs as being some sort of memory store for AGIs. The knowledge isn't reliable enough. An AGI would need to ingest and then store knowledge in its own mind, not use an LLM as a reference.
Part of what makes intelligence intelligent is the ability to see information and learn on the spot, and further, to learn via its own senses.
Let's look at bats. A bat is very close to humans, genetically. Yet if somehow we took "bat memories" and were able to implant them in humans, how on earth would that help? How would using bat memories of sound-based navigation to "see" even work? Of flying? Of social structure?
For example, we literally don't have the brain matter to see spatially the same way bats do. So when we accessed those memories, they would be so foreign that their usefulness would be greatly reduced. They'd be confusing, unhelpful.
Think of it. Ingress of data and information is sensorially derived. Our mental image of the world depends upon this data. Our core being is built upon this foundation. An AGI using an LLM as "memories" would be experiencing something just as foreign.
So even if LLMs were used to allow an AGI to query things, they wouldn't be used as "memory". And the type of memory store that LLMs exhibit is most certainly not how intelligence as we know it stores memory.
We base our knowledge upon directly observed and verified fact, but further upon the senses we have. And all information derived from those senses is filtered and processed by specialized parts of our brains before we even "experience" it.
Our knowledge is so keyed in and tailored directly to our senses, and the processing of that data, that there is no way to separate the two. Our skill, experience, and capabilities are "whole body".
An LLM is none of this.
The only true way to create an AGI via LLMs would be to simulate a brain entirely, and then start scanning human brains during specific learning events. Use that data to LLM your way into an averaged and probabilistic mesh, and then use that output to at least provide full sense memory input to an AGI.
Even so, I suspect that may be best used to create a reliable substrate. Use that method to simulate and validate and modify that substrate so it is capable of using such data, thereby verifying that it stands solid as a model for an AGI's mind.
Then wipe and allow learning to begin entirely separately.
Yet to do even this, we'd need to ensure that the sensor input enables, at least to a degree, the same sort of sense input. I think Neuralink might be best placed to enable this, for as it works at creating an interface for, say, sight and other senses, it could then use this same series of mapped inputs for a simulated human brain.
This of course works best with a physical form that can also taste the environment around it, and who is also working on an actual android for day-to-day use?
You might say this focuses too much on creating a human-style AGI, but frankly it's the only thing we can try to build and work toward in creating a true AGI. We have no other real-world examples of intelligence to use, and every brain on the planet is part of the same evolutionary tree.
So best to work with something we know, something we're getting more and more adept at understanding, and with brain implants of the calibre and quality that Neuralink is devising, something we can at least understand in far more depth than ever before.
> The first plane ever flies, and people think "we can fly to the moon soon!".
> Yet powered flight has nothing to do with space travel, no connection at all.
You eventually said winged flight much later-- trying to make your point a little more defensible. That's why I started explaining to you the very big connections between powered flight and space travel ;)
I pretty much completely disagree with your wall of text, and it's not a very well reasoned defense of your prior handwaving. I'm going to move on now.
> Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.
Winged flight == "low/high pressure" flight, it's how an airplane wing works and provides lift.
Maybe you just said what you wanted to say extremely poorly. Like "wing technology doesn't get you closer to space." I mean, of course, fins and distribution of pressure are important, but a relatively small piece.
On the other hand, powered flight and the things we started building for powered flight got us to the moon. "Powered flight" got us to turbojets, and turbomachinery is the number one key space launch technology.
> Maybe you just said what you wanted to say extremely poorly.
Or maybe you didn't read closely? You claimed I didn't mention winged flight, yet I mentioned that and the method of winged flight. Typically, that means you say "Oh, sorry, I missed that" instead of blaming others.
I have refuted technology paths in prior posts. Refute those comments if you wish, but just restating your position without refuting mine doesn't seem like it will go anywhere.
And if you don't want a reply? Just stop talking. Don't play the "Oh, I'm going to say things, then say 'bye' to induce no response" game.
You gave a big wall of text. You made statements that can't really be defended. If you'd been talking just about wings, you could have made that clear (and not in one possible reading of a sentence that follows an absolutist one).
> Just debate fairly.
The thing I felt like responding to, you were like "noooo, I didn't mean that at all."
> > > > > Yet powered flight has nothing to do with space travel, no connection at all.
Pretty absolute statement.
> > > > > Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech.
Then, I guess you're saying this sentence is trying to restrict it to "airfoils aren't enough to go to space", and not talk about how powered flight led directly to space travel... through direct evolution of propulsion (turbomachinery), control, construction techniques, analysis methods, and yes, airfoils.
I guess we can stay here debating the semantics of what you originally said if you really want to keep talking. But since you're walking away from what I saw as your original point, I'm not sure what you see as productive to say.
That’s not true. There was not endless hype about flying to the moon when the first plane flew.
People are well aware of the limits of LLMs.
As slow as the progress is, we now have metrics and measurable progress towards AGI, even when there are clear signs of limitations on LLMs. We never had this before, and everyone is aware of it. No one is delusional about it.
The delusion is more around people who think other people are making claims of going to the moon in a year or something. I can see it in 10 to 30 years.
> That's not true. There was not endless hype about flying to the moon when the first plane flew.
I didn't say there was endless hype, I gave an example of how one technology would never result in another... even if to a layperson it seems connected.
(The sky, and the moon, are "up")
> People are well aware of the limits of LLMs.
Surely you mean "some people". Because the point in this thread is that there is a lot of hype, and FOMO, and "OMG AGI!" chatter swirling around LLMs. Which will never, ever make AGI.
You said you didn’t comprehend why there was hype and I explained why there was hype.
Then you made an analogy, and I said your analogy is irrelevant because nobody thinks LLMs are AGI, nor do they think AGI is coming out of LLMs this coming year.
Actually, plenty of people think LLMs will result in AGI. That's what the hype is about, because those same people think "any day now". People are even running around saying that LLMs are showing signs of independent thought, absurd as it is.
And hype doesn't mean "this year" regardless.
Anyhow, I don't think we'll close this gap between our assessments.
And yet, the overall path of unconcealment of science and technological understanding definitely traces a line from the Wright brothers to Vostok 1. There is no reason to think a person from the time of the Wright brothers would have found it a simple line, easily predicted by the methods of their times, but I doubt any person who worked on Vostok 1 would say their efforts were epochally unrelated to the efforts of the Wright brothers.
AI can produce a new type of game where choices are dynamic and outcomes are generated by LLM agents. Fiction is an hallucination and LLMs are master hallucinators.
Basically LLMs have to be given assets and game components that they can easily compose.
> Fiction is an hallucination and LLMs are master hallucinators.
They're jacks of all trades, masters of none.
This has its uses, but they have limits, and for now at least, those limits are under the threshold for that.
I have actually tried using them to make a text adventure to help learn German. The result was at the lower end of the quality range I've witnessed from LLM output: a nice first draft, not shippable, missing a core element, missing a lot of content, too simple, the kind of thing where you'd give the output of the LLM as a code challenge to a job candidate to see how they improve it.
An AI can easily produce a filler slop story. They struggle much more with creating something new and interesting. For a CYOA-type story for kids that might be reasonable, although they tend to make a hash of the details. There are more problems, like: does the AI know when to stop? Can it recognize or generate a bad end or a good end without explicit instruction from the player?
Something like:
Generate a choose your own adventure story about a young boy shipwrecked on an island populated by hostile pirates. A hidden cave holds treasure. There is a jungle on the island. Dangerous jungle creatures inhabit the island and the boy can not fight. Also hidden on the island is a boat. Each story section is around 200 words long and ends with a multiple choice question for the player to select which path they want to pursue next. The story is complete either when the boy dies or finds the boat, after no more than 20 story segments.
I have some doubts the AI will be able to handle all of that and keep it interesting and coherent. This sort of storytelling requires some attention to detail that LLMs usually struggle with.
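For what it's worth, here's a minimal sketch of the harness such a prompt implies. The generate() function is a hypothetical stand-in for whatever LLM client you'd use; the point is all the state the harness has to carry (the transcript, the segment cap, end detection), which is exactly where I'd expect it to fall apart:

    # Sketch of a driving loop for the CYOA prompt above. generate() is
    # a hypothetical stand-in for an actual LLM call (API, local, etc.).
    SETUP = (
        "Generate a choose-your-own-adventure story about a young boy "
        "shipwrecked on an island of hostile pirates. Each segment is "
        "~200 words and ends with a multiple-choice question. The story "
        "ends when the boy dies or finds the hidden boat."
    )

    def generate(transcript: str) -> str:
        """Hypothetical LLM call: returns the next story segment."""
        raise NotImplementedError("wire up your LLM client here")

    def play(max_segments: int = 20) -> None:
        transcript = SETUP
        for _ in range(max_segments):
            segment = generate(transcript)
            print(segment)
            # Naive end detection -- exactly the weakness raised above:
            # the model has to reliably signal its own endings.
            if "THE END" in segment.upper():
                return
            choice = input("Your choice: ")
            transcript += f"\n{segment}\nPlayer chose: {choice}\n"
        print("Hit the segment cap without reaching an ending.")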
Nah I think not considering this idea at all is the extremely dumb and brain dead opinion. LLMs can tell stories. Realistic causes and effects aren’t even consistent in human stories. A good story isn’t 100 percent dependent on this.
The LLM walks a fine line between hallucinating too much and not enough. Either way, you can pretty much guarantee that almost all stories made in games now are already mostly written by an LLM; it's just that the writing is edited and curated by a human.
I've done a short LLM-powered VN, and LLM actions were restricted to local interactions only because of how weak it is at making up the story. It's great at removing the parser-based interactions, but I think that's it.
There's a second technical problem: such stories are represented by a form of state machine, and you would need to recompile it on the fly, making many checks very difficult (you would need to be able to check reachability on the fly, chunk transitions, etc.). I think it would take years to get to the level of some of the great IF games with an LLM, and not just a cool PoC.
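To make the reachability point concrete, here's a toy sketch (node names invented) of the check you'd have to re-run every time the LLM rewrites a transition:

    from collections import deque

    # Toy story graph: node -> possible next nodes. In an LLM-driven VN
    # the model could rewrite these edges mid-game, so the check below
    # has to be redone on the fly after every change.
    story = {
        "shore": ["jungle", "cave"],
        "jungle": ["cave", "pirate_camp"],
        "cave": ["treasure"],
        "pirate_camp": ["bad_end"],
        "treasure": ["good_end"],
        "bad_end": [],
        "good_end": [],
    }

    def reachable(graph: dict, start: str) -> set:
        """BFS: every node reachable from `start`."""
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in graph[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # After any LLM edit to the graph: is some ending still reachable?
    assert {"good_end", "bad_end"} & reachable(story, "shore")

And that's the easy check; verifying that every path still makes narrative sense is much harder.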
As a grumpy old symbolic AI hand, I do wonder whether it would be possible to build a (perhaps crude) ontology-based simulation with consistency, cause and effect, and so on, and then use the results of that for prompting an LLM (a rough sketch of what I mean follows below).
But as a consumer, I lean far to the side of "give me a handcrafted tunnel experience with the illusion of choice" in the divide between consequences yes or no. I don't think I'd actually want this "simulation behind an LLM facade". If I'm in the mood for reading (or for listening to voice actors reading to me), I'd rather have it be something more meaningful than just a game state. But to those on the other end of the spectrum, this might actually be the holy grail of game building.
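Here's roughly what I have in mind, as a toy sketch (all names invented, and narrate() standing in for any LLM call): the symbolic layer owns the facts and the cause-and-effect rules, and the LLM is only allowed to put prose on top of them:

    # Toy "simulation behind an LLM facade": the ontology owns the
    # facts; the LLM only narrates them. All names are invented.
    world = {"lamp": "unlit", "room": "dark", "door": "locked"}

    def apply_action(action: str) -> None:
        """Crude cause-and-effect rules the LLM cannot override."""
        if action == "light lamp":
            world["lamp"] = "lit"
            world["room"] = "bright"  # consequence enforced symbolically
        elif action == "open door" and world["door"] == "locked":
            pass  # impossible action: state stays consistent

    def narrate(state: dict, action: str) -> str:
        """Hypothetical LLM call, constrained to the given facts."""
        facts = ", ".join(f"the {k} is {v}" for k, v in state.items())
        return f"[LLM narrates '{action}' given that {facts}]"

    apply_action("light lamp")
    print(narrate(world, "light lamp"))

The hallucination problem doesn't go away, but at least the model can't contradict the world state without it being detectable.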
Is GPU improvement driven more by gaming than by AI hype? Gaming is a huge industry, and there is real money coming from it. Does speculative money from VCs actually overshadow actual money spent by consumers?
I know stock prices are driven by AI hype, but how much does it actually affect the bottom line of Nvidia? I think GPU improvement happens regardless of AI.
Datacenter revenue alone is ~10x gaming revenue. The datacenter business is thought to have literally ~100x the earnings all up (the H100 and the 4090 have similar transistor counts, but the H100 sells for over $30k while the 4090 sells for $2k, which indicates huge margins).
Gaming is pretty much insignificant for Nvidia. That's why Nvidia's stock has 10x'ed recently and their P/E looks better now than it did 5 years ago despite that stock increase. They found a new market that dwarfs their old market.
NVIDIA’s net income grew ~580% year-on-year in their 2024 fiscal year. FY2025 is on track for 100%+ growth, essentially 14x in the last 2 years. This is not coming from gaming, “AI hype” is having a huge effect on NVIDIA’s bottom line.
It all depends on whether AI companies can continue to find significant improvements to their models this year. Are transformers reaching their limits? Can researchers find the next level of performance or are we headed for another AI slump?
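As a quick sanity check on the compounding above (all figures are the rough ones quoted in the last two comments, not exact):

    # Rough figures from the comments above; approximate, not exact.
    fy2024_multiple = 1 + 5.80               # ~580% YoY growth -> x6.8
    fy2025_multiple = fy2024_multiple * 2    # 100%+ growth -> x2 on top
    print(f"~{fy2025_multiple:.1f}x over two years")  # ~13.6x, i.e. "essentially 14x"

    h100, rtx4090 = 30_000, 2_000            # rough selling prices in USD
    print(f"~{h100 // rtx4090}x the price for a similar transistor count")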
Interpreting your question about "GPU improvement" from a product perspective, my read is that NVIDIA is of course targeting AI applications and the datacenter. To that end it just focuses on silicon that makes most sense for AI compute, and not so much for gaming.
Of course, the GPUs for the datacenter and for gaming are the same designs, so my read is that in gaming NVIDIA makes up for the lack of actual performance in traditional rendering by pushing technologies that can utilize tensor cores (AI upscaling, frame prediction, ray tracing + denoising, etc.), which don't contribute to game graphics as much as they could have if NVIDIA had built an architecture tailored to gaming needs with the technologies they have. It's also sexier in theory to talk about exclusive AI-powered technologies proprietary to NVIDIA than just better performance.
"NVIDIA’s financial report reveals that its data center business revenue in FY2Q25 grew by 154% YoY, outpacing other segments and raising its contribution to total revenue to nearly 88%."
Gaming almost doesn't even register in Nvidia's revenue anymore.
But I do think Jensen is smart enough to not drop gaming completely, he knows the AI hype might come and go and competitors might finally scrounge up some working SDKs for the other platforms.
Gaming is effectively irrelevant to Nvidia now. The stock appreciation over the last 8 years that took them from a niche player to a globally dominant company is all from revenue that first came in from crypto and then got absolutely dwarfed by AI.
ML was big before LLMs and nVidia was already making a killing from selling expensive GPUs that would never draw a single polygon ca 2015. They've been hobbling FP64 (double precision) support in cheaper "consumer" GPUs, to prevent their use in most data centers, for a long time too.
Looking into this a bit, it seems that nVidia was still making most of its money from "gaming" as recently as early 2022. I'd suspect that, if crypto mining could be separated out, the transition point actually happened a couple of years earlier, but nevertheless the datacenter segment didn't become dominant until about 2-2.5 years ago. It existed well before then, but wasn't that big.
I came into this rolling my eyes thinking he acquired some ability to learn and be humble or some programming or management topic... the typical bs you see on hacker news.
You know how you can make yourself see double by crossing your eyes a bit? Do this and you get 4 images. Align the center 2 images so they combine, and your eyes will automatically "lock on".
Try it first with your fingers. Hold up your index fingers, pointing straight up, between your face and the computer screen. Focus on the screen. The fingers should divide into four. Move your fingers until the middle two combine.
If you focus on your fingers instead, you can do the same thing to the screen.
I believe 15 years ago was peak deadmau5, Skrillex, the dubstep explosion, EDC expanding everywhere. No way it's globally more popular now than it was in the 2010s.
It's definitely more popular now. Unless you're still deeply in the scene you wouldn't see it, but there are so many massive acts now, far more than there were back then, and niche genres have become much more massive. E.g. techno artists can draw massive crowds and they never were doing that in the 2010s.
But compiled code loses a lot of the "extra" data. Also these are "language" models so I would be surprised if training on binaries was much more efficient versus writing in some kind of language.
Besides, how do you even check the result now without running untrusted code? Every run of the model you need to reverse-engineer the binary?
Doing this requires higher IQ. Believe it or not a ton of people literally don’t do this because they can’t. This ability doesn’t exist for them. Thousands of pages of code is impossible to understand line by line for them. This separation of ability is very very real.