Imagine if life had evolved on Earth or Theia prior to impact. Imagine if it was intelligent and played witness to the giant cataclysm.
Given that intelligence took an awfully long time to emerge from LUCA, that seems implausible. But it's fun to imagine pre-Theia "Silurians". That sort of impact would have scorched earth of any trace or remnant of their existence. It feels as though there must be sufficiently advanced civilizations out there witnessing this exact scenario play out without the necessary technology to stop it. Though that fate would be horrifying.
Another thing to think about: shortly after the Big Bang (if there was one, Lambda-CDM or similar models holding up), the temperature of the early universe was uniformly 0-100 degrees Celsius. It may have been possible for life to originate in this primordial interstellar medium without so much as needing a host planet or star! Just life coalescing in space itself.
That early primordial soup, if it existed, could have seeded the whole universe. Most aliens might have matching molecules and chirality if those decisions predate our galaxy.
Also more importantly: it was uniformly warm. No gradients.
Life on Earth doesn't work merely because we get energy from the sun. It works because we get low-entropy energy from the sun and can radiate high-entropy energy into cold space.
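A rough back-of-the-envelope of that entropy budget (my own illustrative numbers, using the simple Q/T estimate for radiation entropy, so treat it as order-of-magnitude only):

    # Rough entropy budget for Earth (illustrative, not a climate model).
    # Thermal radiation carries entropy of roughly Q/T, so absorbing energy
    # at a high effective temperature and re-emitting it at a low one
    # exports entropy to space.
    T_sunlight = 5800.0   # K, effective temperature of incoming sunlight
    T_earth    = 255.0    # K, Earth's effective emission temperature

    Q = 1.0  # joule absorbed and, in steady state, re-emitted
    entropy_in  = Q / T_sunlight   # J/K arriving with the sunlight
    entropy_out = Q / T_earth      # J/K leaving as infrared

    print(f"entropy out / entropy in ~ {entropy_out / entropy_in:.0f}x")
    # ~23x more entropy leaves than arrives

Same energy in and out, but the outgoing side carries far more entropy, and that export is the headroom that lets low-entropy structure like life accumulate locally.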
> It feels as though there must be sufficiently advanced civilizations out there witnessing this exact scenario play out without the necessary technology to stop it. Though that fate would be horrifying.
I suspect this is not actually that common. Giant impacts are more common in early solar systems; things eventually settle into nice circular orbits like we have now. Whereas intelligent life does seem to take a while to evolve, so probably more common later in a solar system's life cycle.
Depends how you count it. One planet, but a solid handful of mass extinctions with big adaptive radiations, most with several million years of development, and tons of reasonably intelligent social animals, only one of which produced industrial civilization. But yes, working with the best evidence we have...
Our sun and earth won't last long enough, but Mercury's orbit is potentially unstable.
A red dwarf might harbor life-bearing planets long enough to see its long-lived orbits eventually destabilize. Or perhaps witness the even rarer interstellar collision, or destabilization from rogue planets, etc.
He makes no claim about what life was before the formation of the moon, but rather that the cataclysm of the moon's formation is what changed life on Earth - of course he wraps the whole idea up in his mysticism - but his 1950s writing on this was not far off from what happened.
I remember there being some modeling done to determine whether the Theia impact blew a chunk off Earth or basically re-liquefied the planet. If I recall correctly, the resulting hypothesis was that the thermal load would have re-melted at least the crust (evidence for this being the density stratification of the moon, suggesting it formed out of a basically completely liquefied ball, which would have implied the crust was also liquefied).
There is some interesting evidence suggesting the deeper layers remained intact, in the form of a region under the Pacific that might be the impact scar. It's an inexplicably-dense zone that causes hot-spots at its corners resulting in increased surface volcanism, like how the edges of a leaf burn before the middle in a fire.
According to physics simulations, the atmosphere was composed of vaporised rock for several centuries after the impact until it cooled down below 1000 degrees. The entire surface was magma. https://en.wikipedia.org/wiki/Synestia
No; I don't remember the article saying specifically, but I would assume that if there is no solid land left, there is no liquid water left either. Water molecules would have been blasted into the "crust soup," later released as water vapor as the atmosphere regenerated, and eventually condensed into liquid water once the surface settled down a bit (the chemicals that could be gaseous would have tended to float to the top of the soup as it settled).
The pre-Theia "Silurians," as you call them, could, depending on their technological level, have left traces in the solar system, like our Parker Solar Probe or something at the Lagrange points.
Then again, how well do we know of stuff in these spaces today? It seems to me we barely have a clue of the space junk we ourselves sent up orbiting in our backyard.
The early universe was actually billions of degrees immediately post-Big Bang and remained far too hot for liquid water for hundreds of thousands of years, only reaching "habitable" temperatures long after matter had already begun forming into structures.
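For anyone who wants the rough numbers (standard Lambda-CDM values from memory, and a crude matter-dominated approximation for the age, so treat this as order-of-magnitude only): the background temperature scales as T0(1+z), which puts the 0-100 C window around redshift 100-140, roughly 10-17 million years after the Big Bang - sometimes called the "habitable epoch" - long after recombination.

    # When was the cosmic background radiation between 0 and 100 C?
    # Uses T(z) = T0 * (1 + z) and a matter-dominated age t ~ (1+z)^(-3/2).
    # Parameters are approximate; this is an order-of-magnitude estimate.
    T0 = 2.725                      # K, background temperature today
    H0 = 67.7e3 / 3.086e22          # Hubble constant in 1/s (67.7 km/s/Mpc)
    omega_m = 0.31                  # matter density parameter
    seconds_per_myr = 3.156e13

    def redshift_at_temp(T_kelvin):
        return T_kelvin / T0 - 1.0

    def age_myr(z):
        # matter-dominated: t = 2 / (3 H0 sqrt(omega_m)) * (1 + z)^(-3/2)
        t_seconds = 2.0 / (3.0 * H0 * omega_m**0.5) * (1.0 + z) ** -1.5
        return t_seconds / seconds_per_myr

    for T in (373.0, 273.0):        # 100 C and 0 C
        z = redshift_at_temp(T)
        print(f"{T:.0f} K -> z ~ {z:.0f}, age ~ {age_myr(z):.0f} Myr")
    # prints roughly z ~ 136 at ~11 Myr and z ~ 99 at ~17 Myr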
We certainly cannot know if a previous intelligent civilization was present on earth before the impact.
And it's not even that the impact erased their traces from Earth; it's that if they went extinct long enough ago (more than 100 million years), we could not infer their presence with today's technology, given the fossil (and other types of) record.
Temperature on its own wouldn't be enough for life, would it? Isn't everything moving apart way too fast after the Big Bang, and therefore too far apart for whatever life there would be to find food (or whatever the equivalent source of energy is)?
Temperature isn’t even close to being enough. Even with everything else being so good for life here, without a moon we may have been stuck at the bacterial phase, or life may never have formed at all: no tides recycling minerals, no tide pools concentrating amino acids, no constant wet-dry cycles driving evolutionary pushes.
Edit: beyond that, there’s the need for a stable orbit, a stable axial tilt, a stable star (few mega flares), some kind of galactic shield a la Jupiter, and more.
> Imagine if life had evolved on Earth or Theia prior to impact.
There's good reason why this first era of Earth's history is called the "Hadean eon" - as in the fires of hell.
> Throughout part of the eon, impacts from extraterrestrial bodies released enormous amounts of heat that likely prevented much of the rock from solidifying at the surface. As such, the name of the interval is a reference to Hades, a Greek translation of the Hebrew word for hell.
> Imagine if it was intelligent and played witness to the giant cataclysm.
They'd have had to be watching from one very high orbit, and even then I wouldn't bet favorably on their chances considering the sheer gargantuan volume of debris Theia tossed into the space above earth.
Anywhere terrestrial and they'd be dead far too quickly to watch much of anything. I've seen some fairly detailed models on the presumed effects of this collision and it would have rapidly super-heated the whole Earth's atmosphere while vaporizing or melting the crust of the entire world down to a depth of at least several hundred meters. Good luck finding a bunker that can handle that.
If it weren't for Adobe's crappy support of the player, I would agree, but they did much more harm than good with it. It was a massive attack surface and they didn't care about closing their zero-day drive-by exploits in a sensible timeframe.
Also they were basically the founders of persistent fingerprinting via Flash cookies.
So no, thank you, I'm more than happy it didn't thrive more than it already did.
SWF was simultaneously brilliant and a festering wound that required amputation, and I would have welcomed a replacement that wasn't the biggest attack surface on the internet. I too love Homestar Runner.
IMO the fact that it belonged to Adobe was the biggest problem, if SWF had been managed by a more capable software org it could have been maintained in a way that kept it from getting banned from the internet. And remember, that's how bad it was - it got banned from the internet because it was absolutely indefensible to leave it around. SWF getting cancelled magically stopped every single family member I have from calling me with weird viruses and corruption they managed to stumble into. I saw more malicious code execution through SWF than I saw from my dumb little cousins torrenting sus ROMs and photoshop crackers. I'd rather not have it than have those problems persist.
absolutely. really is strange that you used to be able to download a music video in less than 2-3mb with lossless video quality, but now that's not really a thing anymore. I feel like if Adobe hadn't gotten greedy and encouraged its use for absolutely everything (and/or web standards had gotten up to speed faster), people wouldn't approach talking about Flash with the 10-foot pole they often do today (as a platform—not how everyone talks about how much they loved flash games)
What do you mean by “HD music video”? If you mean a literal video, then today’s video and audio codecs are more efficient than what Flash used, not less. If the music videos were that small then they must have given up a lot in quality. If you mean a Flash vector animation, then that’s different of course, but that doesn’t describe a typical music video.
Conventional video codecs are also pretty good at compressing animations. I once made a multi-minute animation of a plane taking off and H.264 compresses it to hundreds of kilobytes.
yes, stuff like that & the IOSYS MVs. you technically can do stuff like that today - there's nothing stopping you from doing it with SVGs - but i meant more the social part of it. it's just interesting that if you want to do the same thing (put an animated video on the internet) the usual way, it's now 10x bigger yet looks worse.
also i don't think there's anything like Flash (the authoring software) but for SVGs. i hope there is one, but for now I wouldn't say Inkscape + a text editor counts
People loved the games, but not the super-custom Flash-based menus that required a loading bar and worked totally differently (and slightly jankily) on each website.
That's because people have more bandwidth today and therefore online video is higher quality now. You can easily transcode a music video down to 3MB using modern codecs (and even not-so-modern ones like H.264), and it will look somewhat worse than what typical video sites serve, but still pretty good.
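The arithmetic bears that out (the duration and audio bitrate below are just illustrative numbers I picked):

    # What does a 3 MB budget allow for a ~3.5 minute music video?
    duration_s = 210                        # ~3.5 minutes
    file_size_bits = 3 * 1024 * 1024 * 8    # 3 MB expressed in bits

    total_kbps = file_size_bits / duration_s / 1000
    audio_kbps = 48                         # a serviceable AAC/Opus track
    video_kbps = total_kbps - audio_kbps

    print(f"total ~ {total_kbps:.0f} kbps, ~ {video_kbps:.0f} kbps left for video")
    # ~120 kbps total, ~72 kbps for video: rough at 1080p, but watchable
    # at 480p-ish resolutions with H.264/H.265/AV1.

At the small resolutions of the Flash era, that budget goes a long way.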
Honestly, we can have that today. The real power of Flash was the fully integrated development environment. It was one of the first programming experiences I had, and all I needed to do amazing stuff was a book and a copy of Flash MX.
Adobe needed to take Flash seriously as a platform. Instead they neglected it, making it synonymous with crashes and security problems, and they milked developers as much as possible.
I bought Flash once. I found a crashing bug and jumped through hoops reporting it. A year or so later, they updated the ticket to suggest I drop $800 for the privilege of seeing whether it had been fixed. I did not make the mistake of giving them money ever again.
They had such an opportunity to take advantage of a platform with a pre-iPhone deployment in the high 90% range, and they just skimped it into oblivion. What a disgrace for everyone who actually cared.
This is another axis, separate from (or orthogonal to) worldbuilding.
Recent Marvel and Disney films, the Jurassic Park and Star Wars sequels, and most Godzilla / Kong slop don't build believable worlds. The writers don't spend any time writing the universe that the story takes place in.
Lord of the Rings (the theatrical film trilogy), Game of Thrones (save for the last seasons), and Jurassic Park (1993) all build vast and credible worlds. Intricately detailed, living and breathing universes. Backstories, histories, technologies, warring factions, you name it. They then create believable characters that occupy those worlds and give them real character arcs within which they suffer, rise to prominence, grow, and die. Multiple heroes with multiple journeys. You're fully immersed in the fictional world, watching characters you care about occupying it. It's masterful storytelling.
Villeneuve's Dune has the same vast world and literature to draw upon as many of the other great epics, but he makes the rare mistake of not communicating anything to you about it. If you haven't read the books, much of the story is easily lost. He doesn't spend time on character arcs or so much as drop hints about the subtleties of the world. It's a super rare misstep, because most bad storytelling comes from under-baking the fictional world.
Then there's the mistake of sequels that try to expand on the mystery of the original world. The Matrix films and countless others have over-illuminated the mystery of their stories in trying to build universes, and in doing so they lost the magic.
> [Dune] makes the mistake of not communicating anything to you about it. If you haven't read the books, much of the story is easily lost.
Counterpoint: my wife. I took her to see Dune; she knew nothing at all about it besides how excited I was to see it, and she got everything. Like, seriously, everything. She's a super intelligent and intuitive person, and Villeneuve is one of her favorite directors, so she's maybe the ideal audience member.
It might be fair to say that the exposition is too subtle for a general audience to pick up, but it's certainly there. I refuse to hold that against the film, though. The usual state of Hollywood movies is to browbeat an audience with heavy-handed explanations, so I love that Villeneuve makes you pay attention and think and remember and put together clues to understand everything that's going on. It's sophisticated filmmaking, dammit, and there's not enough of that around - especially in big-budget / sci-fi / franchise films.
In my opinion that's what makes Villeneuve's version so great. For example, I think almost any other director would have had an info dump about what Mentats are in the Dune universe, their motivations, and why they are important. Instead, in Villeneuve's version, you simply see the results. For those watching the film without the context, you simply chalk it up to a weird and wonderful way that the universe works. For those that have read the book, you get to do the information dump about Mentats on your poor unsuspecting wife who's watching the film with you.
This embodies show don't tell and it works amazingly.
> This embodies show don't tell and it works amazingly.
That's not "show, don't tell". That's "you need the companion book".
A masterclass in "Show, don't tell" is the intro to Pixar's "Up". If you haven't seen it, you absolutely must.
"Show, don't tell" isn't stuff that is lost on the uninitiated. It's stuff that is masterfully communicated without the need for corny expository dialogue.
Villeneuve's mentats are like an adult joke in a kid film.
The films don't really give themselves a need to explain the mentats beyond "they're good at maths".
I do think they could have done better at showing that mentats are capable of huge feats of computation and planning and take the place of advanced computers, and that wouldn't need exposition. The "answer a numerical question with unnecessary decimal places" trope was worn when Commander Data did it for the millionth time. Moreover, it was something that seemed like a simple multiplication: something normal humans who are good at mental arithmetic can do. Having Thufir do the eye thing to deduce the exact location of the hunter-killer agent based on a huge stream of data would have been a good way to do it, for example. That would have made it clearer that Thufir (and by extension Piter via the lip tattoo) was more than a uniformed wedding planner and is actually a powerful, indispensable and dangerously skilled superhuman.
Likewise having someone lament that, say, an ornithopter or carryall could use an autopilot and someone reply "ha, yes, and get the planet nuked from orbit by the Great Families for harbouring a thinking machine, not a good plan" would have shown the approximate limits on technology leading to the need for mentats.
Not showing that didn't really affect the story they did choose to tell (i.e. one that, for example, doesn't ever mention or allude to the Butlerian Jihad), but I think they could have added just a little more useful depth without it just being superfluous book detail added for the book fans to notice.
One wonders if they left out the war on thinking machines as being at risk of breaking the suspension of disbelief for being too (pre-!)derivative of the Matrix and being overly close to current zeitgeist with LLMs dominating every conversation.
You don't need to know that the character is a mentat. The story works perfectly well without that knowledge. But if you do, it adds a second layer to the scene. Much like how watching the early Simpsons is even better if you have a grounding in the novels and movies they're parodying, but that grounding isn't required to get the show.
> A masterclass in "Show, don't tell" is the intro to Pixar's "Up". If you haven't seen it, you absolutely must.
I saw it quite some time ago; please point out some clips where you feel the "show, don't tell" is executed well.
Disney invited me to talk about my GenAI startup and research in front of a bunch of their execs across ABC, ESPN, Pixar, Streaming, etc. All of their folks were super nice and gracious to our small startup except for one.
Steve May basically scoffed at how little my small team could accomplish. Mind you we were using mocapped skeletal animation and object animation curves to fully steer video diffusion over a year and a half ago. Before image to video modalities. He picked apart our training and engineering and gloated that they could do better.
The incident is seared into my brain.
I can't help but think of Disney as the Empire and Pixar as the Death Star.
If he was upset, it wasn't over how little your team could accomplish.
That would have inspired admiration.
It's most likely a projection of their own inability to deliver results.
> In the future we're going to regret breathing air. It's the accelerant for so many health problems.
A non-oxygen-dependent energy system for the human cell is the only option moving forward. We need to utilize a clean energy source like sunlight and dump that oxygen dependency once and for all. Cyanobacteria were a crutch dependency that helped bootstrap that whole life thing pretty quickly for the demo. We have a proven concept now that we know works. Can we leave the idea of using oxygen back in the GOE era where it belongs? Building all this complexity on top of a fundamentally flawed basis like oxygen reactivity was the main mistake.
I love the satire. (Did I really write "regret breathing air" before editing or something?)
Oxygen is an amazingly energy-rich oxidizer and is super abundant for us. The oxygenation of Earth was one of the key steps, and it might be a "hard step" for other civilizations.
Of course oxidation causes a lot of damage and byproducts, and is one of the causes of our aging and death. But without it, well...
I was primarily referring to particulate matter suspended within the air we breathe. High particulate levels measurably reduce lifespan in several population studies, and they also produce noticeable pulmonary and cardiac disease states.
shit sorry, that was my mistake. lol. I meant to add a strikethrough "bad" but realized HN doesn't have strikethrough, so I deleted it and thought I added a "FTFY" but forgot to add it. sorry about that.
No worries at all! I frequently edit my comments on HN to better wordsmith my arguments, so I half thought I'd made the mistake and been caught in a slip up.
My problem with AGI is the lack of a simple, concrete definition.
Can we formalize it as handing out a task expressible in, say, n^m bytes of information that encodes a task of n^(m+q) real algorithmic and verification complexity -- and then solving that task within certain time, compute, and attempt bounds?
Something that captures "the AI was able to unwind the underlying unspoken complexity of the novel problem".
I feel like one could map a variety of easy human "brain teaser" type tasks to heuristics that fit within some mathematical framework and then grow the formalism from there.
>My problem with AGI is the lack of a simple, concrete definition.
You can't always start from definitions. There are many research areas where the object of research is to know something well enough that you could converge on such a thing as a definition, e.g. dark matter, consciousness, intelligence, colony collapse syndrome, SIDS. We nevertheless can progress in our understanding of them in a whole motley of strategic ways, by case studies that best exhibit salient properties, trace the outer boundaries of the problem space, track the central cluster of "family resemblances" that seem to characterize the problem, entertain candidate explanations that are closer or further away, etc. Essentially a practical attitude.
I don't doubt in principle that we could arrive at such a thing as a definition that satisfies most people, but I suspect you're more likely to have that at the end than the beginning.
After researching this a fair amount, my opinion is that consciousness/intelligence (can you have one without the other?) emerges from some sort of weird entropy exchange in domains in the brain. The theory goes that we aren't conscious, but we DO consciousness, sometimes. Maybe entropy, or the inverse of it, gives rise to intelligence, somehow.
This entropy angle has real theoretical backing. Some researchers propose that consciousness emerges from the brain's ability to integrate information across different scales and timeframes, which would essentially create temporary "islands of low entropy" in neural networks. Giulio Tononi's Integrated Information Theory suggests consciousness corresponds to a system's ability to generate integrated information, which relates to how it reduces uncertainty (entropy) about its internal states. Then there is Hameroff and Penrose, which I commented about on here years ago and got blasted for. Meh. I'm a learner, and I learn by entertaining truths. But I always remain critical of theories until I'm sold.
I'm not selling any of this as a truth, because the fact remains we have no idea what "consciousness" is. We have a better handle on "intelligence", but as others point out, most humans aren't that intelligent. They still manage to drive to the store and feed their dogs, however.
A lot of the current leading ARC solutions use random sampling, which sorta makes sense once you start thinking about having to handle all the different types of problems. At least it seems to be helping out in paring down the decision tree.
Yep, it's what I've heard most often too (I only just learnt the other meaning from the kind person prompting me to look up the definition). I also don't think of it as that much of an unusual word, but hey.
There are lots of VFX professionals on LinkedIn having a total field day with AI tools and posting mind-blowing stuff. Somehow it hasn't reached the rest of social media yet.
On that last point: AI is going to propel individual artists ahead of big Hollywood studios. They won't need studio capital anymore, and they'll be able to retain all the upside themselves.
It doesn't make them not an artist, but it might make them a bad artist. That image gets more nonsensical the more you look at it. How big is that skull in the foreground supposed to be?
LLMs are useful in that respect. As are media diffusion models. They've compressed the physics of light, the rules of composition, the structure of prose, the knowledge of the internet, etc. and made it infinitely remixable and accessible to laypersons.
AGI, on the other hand, should really stand for Aspirationally Grifting Investors.
Superintelligence is not around the corner. OpenAI knows this and is trying to become a hyperscaler / Mag7 company with the foothold they've established and the capital that they've raised. Despite that, they need a tremendous amount of additional capital to will themselves into becoming the next new Google. The best way to do that is to sell the idea of superintelligence.
AGI is a grift. We don't even have a definition for it.
I hate the "accessible to the layperson" argument.
People who couldn't do art before, still can't do art. Asking someone, or something else, to make a picture for you does not mean you created it.
And art was already accessible to anyone. If you couldn't draw something (because you never invested the time to learn the skill), then you could still pay someone else to paint it for you. We didn't call "commissioning a painting" "being an artist," so what's different about commissioning a painting from a robot?
> I hate the "accessible to the layperson" argument.
Accessible to a layperson also means lowering the slope of the learning curve.
Millions of people who would have never rented a camera from a rental house are now trying to work with these tools.
Those publishing "slop" on TikTok are learning the Hero's Journey and narrative structure. They're getting schooled on the 180-degree rule. They're figuring out how to tell stories.
> People who couldn't do art before, still can't do art. Asking someone, or something else, to make a picture for you does not mean you created it.
Speak for yourself.
I'm not an illustrator, but I'm a filmmaker in the photons-on-glass sense. Now I can use image and video models to make animation.
I agree that your average Joe isn't going to be able to make a Scorsese-inspired flick, but I know what I'm doing. And for me, these tools open an entire new universe.
Something like this still takes an entire week of work, even when using AI:
There's lots of editing, rotoscoping, compositing, grading, etc. and the AI models themselves are INSANELY finicky and take a lot of work to finesse.
But it would take months of work if you were posing the miniatures yourself.
With all the thought and intention and work that goes into something like this, would you still say it "does not mean you created it"? Do you still think this hasn't democratized access to a new form of expression for non-animators?
AI is a creative set of tools that make creation easier, faster, more approachable, and more affordable. They're accessible enough that every kid hustling on YouTube and TikTok can now supercharge their work. And they're going to have to use these tools to even stay treading water amongst their peers, because if they don't use them, their competition (for time and attention) will.
I am not an expert, but I have a serious counterpoint.
While training LLMs to replicate the human output, the intelligence and understanding EMERGES in the internal layers.
It seems trivial to do unsupervised training on scientific data, for instance star movements, and discover closed-form analytic models for them. Deriving Kepler's laws and Newton's equations should be fast and trivial, and by that afternoon you'd have much more profound models with 500+ variables which humans would struggle to understand but which can explain the data.
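As a toy stand-in for that claim (this is ordinary regression, not anything an LLM does internally, and the planetary numbers are approximate, from memory), a plain log-log fit on orbital data already recovers Kepler's third law as a slope of ~1.5:

    # Recovering Kepler's third law (T^2 proportional to a^3) from raw data
    # with a simple least-squares fit in log space. Values are approximate.
    import math

    planets = {                 # name: (semi-major axis in AU, period in years)
        "Mercury": (0.387, 0.241),
        "Venus":   (0.723, 0.615),
        "Earth":   (1.000, 1.000),
        "Mars":    (1.524, 1.881),
        "Jupiter": (5.203, 11.86),
        "Saturn":  (9.537, 29.45),
    }

    xs = [math.log(a) for a, _ in planets.values()]
    ys = [math.log(T) for _, T in planets.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)

    print(f"log T ~ {slope:.3f} * log a  (Kepler's third law predicts 1.5)")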
AGI is what, Artificial General Intelligence? What exactly do we mean by general? Mark Twain said “we are all idiots, just on different subjects”. These LLMs are already better than 90% of humans at understanding any subject, in the sense of answering questions about that subject and carrying on meaningful and reasonable discussion. Yes occasionally they stumble or make a mistake, but overall it is very impressive.
And remember — if we care about practical outcomes - as soon as ONE model can do something, ALL COPIES OF IT CAN. So you can reliably get unlimited agents that are better than 90% of humans at understanding every subject. That is a very powerful baseline for replacing most jobs, isn’t it?
Anthropomorphization is doing a lot of heavy lifting in your comment.
> While training LLMs to replicate the human output, the intelligence and understanding EMERGES in the internal layers.
Is it intelligence and understanding that emerges, or is applying clever statistics on the sum of human knowledge capable of surfacing patterns in the data that humans have never considered?
If this were truly intelligence we would see groundbreaking advancements in all industries even at this early stage. We've seen a few, which is expected when the approach is to brute force these systems into finding actually valuable patterns in the data. The rest of the time they generate unusable garbage that passes for insightful because most humans are not domain experts, and verifying correctness is often labor intensive.
> These LLMs are already better than 90% of humans at understanding any subject, in the sense of answering questions about that subject and carrying on meaningful and reasonable discussion.
Again, exceptional pattern matching does not imply understanding. Just because these tools are able to generate patterns that mimic human-made patterns, doesn't mean they understand anything about what they're generating. In fact, they'll be able to tell you this if you ask them.
> Yes occasionally they stumble or make a mistake, but overall it is very impressive.
This can still be very impressive, no doubt, and can have profound impact on many industries and our society. But it's important to be realistic about what the technology is and does, and not repeat what some tech bros whose income depends on this narrative tell us it is and does.
Well, you have to define what you mean by "intelligence".
I think there just hasn't been enough time; we can take the current LLM technology and put it in a pipeline that includes 24/7 checking of work and building up knowledge bases.
A lot of the stuff that you think of as "new and original ideas" is just like prompting an LLM to "come up with 20 original variations" or "20 original ways to combine" some building blocks it has already been trained on or that have been added into its context. If you do this frequently enough, and make sure to run acceptance tests (e.g. unit testing or whatever is in your domain), then you can really get quite far. In fact, you can generate the tests themselves as well. What's missing, essentially, is autonomous incremental improvement, involving acceptance testing and curation, not just generation. Just like a GAN does when it generates novel images.
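A minimal sketch of the loop I mean (llm_generate and passes_acceptance are hypothetical placeholders, not a real API; wire them to whatever model and test harness you actually use):

    # Generate -> test -> curate loop, sketched. The two stubs below are
    # hypothetical stand-ins for a model call and an acceptance test.
    from typing import Callable, List

    def llm_generate(prompt: str, n: int) -> List[str]:
        raise NotImplementedError("call your model of choice here")

    def passes_acceptance(candidate: str) -> bool:
        raise NotImplementedError("unit tests / domain checks go here")

    def improve(seed: str, rounds: int, per_round: int,
                generate: Callable[[str, int], List[str]] = llm_generate,
                accept: Callable[[str], bool] = passes_acceptance) -> List[str]:
        knowledge_base = [seed]
        for _ in range(rounds):
            prompt = ("Come up with original variations of:\n"
                      + "\n".join(knowledge_base))
            candidates = generate(prompt, per_round)
            survivors = [c for c in candidates if accept(c)]  # curation step
            knowledge_base.extend(survivors)                  # incremental build-up
        return knowledge_base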
"Exceptional pattern matching does not imply understanding." - You'll have to define what you mean by "understanding". I think we have to revisit the Chinese Room argument by John Searle. After all, if the book used by the person in the room is the result of training on Chinese, then "the whole Chinese room" with the book and operator may be said to "understand" Chinese.
It's not just pattern matching, but emergent structures in the model - a non-von-Neumann architecture - that form while it's being trained. Those structures are able to manipulate symbols in ways that are extremely useful and practical for an enormously wide range of applications!
If by "understand" we mean "meaningfully manipulate symbols and helpfully answer a wide range of queries" about something, then why would you say LLMs don't understand the subject matter? Because they sometimes make a mistake?
The idea that artificial intelligence or machines have to understand things exactly in the same way as humans, while arriving at the same or better answers, has been around for quite some time. Have you seen this gem by Richard Feynman from the mid 1980s? https://www.youtube.com/watch?v=ipRvjS7q1DI ("Can Machines Think?")
> Well, you have to define what you mean by "intelligence".
The burden of defining these concepts should be on the people who wield them, not on those who object to them. But if pressed, I would describe them in the context of humans. So here goes...
Human understanding involves a complex web of connections formed in our brains that are influenced by our life experiences via our senses, by our genetics, epigenetics, and other inputs and processes we don't fully understand yet; all of which contribute to forming a semantic web of abstract concepts by which we can say we "understand" the world around us.
Human intelligence is manifested by referencing this semantic web in different ways that are also influenced by our life experiences, genetics, and so on; applying creativity, ingenuity, intuition, memory, and many other processes we don't fully understand yet; and forming thoughts and ideas that we communicate to other humans via speech and language.
Notice that there is a complex system in place before communication finally happens. That is only the last step of the entire process.
All of this isn't purely theoretical. It has very practical implications in how we manifest and perceive intelligence.
Elsewhere in the thread someone brought up how Ramanujan achieved brilliant things based only on basic education and a few math books. He didn't require the sum of human knowledge to advance it. It all happened in ways we can't explain which only a few humans are capable of.
This isn't to say that this is the only way understanding and intelligence can exist. But it's the one we're most familiar with.
In stark contrast, the current generation of machines don't do any of this. The connections they establish aren't based on semantics or abstract concepts. They don't have ingenuity or intuition, nor accrue experience. What we perceive as creativity depends on a random number generator. What we perceive as intelligence and understanding works by breaking down language written by humans into patterns of data, assigning numbers to specific patterns based on an incredibly large set of data manually pre-processed by humans, and outputting those patterns by applying statistics and probability.
Describing that system as anything close to human understanding and intelligence is dishonest and confusing at best. It's also dangerous, as it can be interpreted by humans to have far greater capability and meaning than it actually does. So the language used to describe these systems accurately is important, otherwise words lose all meaning. We can call them "magical thinking machines", or "god" for that matter, and it would have the same effect.
So maybe "MatMul with interspersed nonlinearities"[1] is too literal and technical to be useful, and we need new terminology to describe what these systems do.
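For what it's worth, here is what that phrase describes literally (a tiny forward pass with random weights, so it computes nothing meaningful; it's just the bare mechanism):

    # "MatMul with interspersed nonlinearities", taken literally: a two-layer
    # forward pass. Random weights, no training - only the mechanism.
    import numpy as np

    rng = np.random.default_rng(0)
    x  = rng.normal(size=(1, 16))      # an input vector (e.g. an embedding)
    W1 = rng.normal(size=(16, 64))
    W2 = rng.normal(size=(64, 8))

    h = np.maximum(0.0, x @ W1)        # matmul, then a nonlinearity (ReLU)
    y = h @ W2                         # another matmul
    print(y.shape)                     # (1, 8)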
> I think we have to revisit the Chinese Room argument by John Searle.
I wasn't familiar with this, thanks for mentioning it. From a cursory read, I do agree with Searle. The current generation of machines don't think. Which isn't to say that they're incapable of thinking, or that we'll never be able to create machines that think, but right now they simply don't.
What the current generation does much better than previous generations is mimicking how thoughts are rendered as text. They've definitively surpassed the Turing test, and can fool most humans into thinking that they're humans via text communication. This is a great advancement, but it's not a sign of intelligence. The Turing test was never meant to be a showcase of intelligence; it's simply an Imitation Game.
> Those structures are able to manipulate symbols in ways that are extremely useful and practical for an enormously wide range of applications!
I'm not saying that these systems can't be very useful. In the right hands, absolutely. A probabilistic pattern matcher could even expose novel ideas that humans haven't thought about before. All of this is great. I simply think that using accurate language to describe these systems is very important.
> Have you seen this gem by Richard Feynman from the mid 1980s?
I haven't seen it, thanks for sharing. Feynman is insightful and captivating as usual, but also verbose as usual, so I don't think he answers any of the questions with any clarity.
It's interesting how he describes pattern matching and reinforcement learning back when those ideas were novel and promising, but we didn't have the compute available to implement them.
I agree with the point that machines don't have to mimic the exact processes of human intelligence to showcase intelligence. Planes don't fly like birds, cars don't run like cheetahs, and calculators don't solve problems like humans, yet they're still very useful. Same goes for the current generation of "AI" technology. It can have a wide array of applications that solve real world problems better than any human would.
The difference with those examples and intelligence is that something either takes off the ground and maintains altitude, or it doesn't. It either moves on the ground, or doesn't. It either solves arithmetic problems, or doesn't. I.e. those are binary states we can easily describe. How this is done is an implementation detail and not very important. Whereas something like intelligence is very fuzzy to determine, as you point out, and we don't have good definitions of it. We have some very basic criteria by which we can somewhat judge whether something is intelligent or not, but they're far from reliable or useful.
So in the same way that it would be unclear to refer to airplanes as "magical gravity-defying machines", even though that is what they look like, we label what they do as "flight" since we have a clear mental model of what that is. Calling them something else could potentially imply wrong ideas about their capabilities, which is far from helpful when discussing them.
And, crucially, the application of actual intelligence is responsible for all advancements throughout human history. Considering that current machines only excel at data generation, and at showing us interesting data patterns we haven't considered yet, not only is this a sign that they're not intelligent, but it's a sign that this isn't the right path to Artificial General Intelligence.
Hopefully this clarifies my arguments. Thanks for coming to my TED talk :)
> Superintelligence is not around the corner. OpenAI knows this and is trying to become a hyperscaler / Mag7 company with the foothold they've established and the capital that they've raised.
+1 to this. I've often wondered why OpenAI is exploring so many different product ideas if they think AGI/ASI is less than a handful of years away. If you truly believe that, you would put all your resources behind that to increase the probability / pull-in the timelines even more. However, if you internally realized that AGI/ASI is much farther away, but that there is a technology overhang with lots of products possible on existing LLM tech, then you would build up a large applications effort with ambitions to join the Mag7.