
Not to be too pessimistic here, but why are we talking about things like this? I get that it’s a fun thing to think about, what we will do when a great artificial superintelligence is achieved and how we’ll deal with it; it feels like we’re living in a science fiction book.

But all we’ve achieved at this point is a glorified token predicting machine trained on existing data (made by humans), not really able to be creative beyond deriving things humans have already made. Granted, they're really good at doing that, but not much else.

To me, this is such a transparent attention grab (and, by extension, money grab by being overvalued by investors and shareholders) by Altman and company, that I’m just baffled people are still going with it.




> why are we talking about things like this?

> this is such a transparent attention grab (and, by extension, money grab by being overvalued by investors and shareholders)

Ilya believes transformers can be enough to achieve superintelligence (if inefficiently). He is concerned that companies like OpenAI are going to succeed at doing it without investing in safety, and they're going to unleash a demon in the process.

I don't really believe either of those things. I find arguments that autoregressive approaches lack certain critical features [1] to be compelling. But if there's a bunch of investors caught up in the hype machine ready to dump money on your favorite pet concept, and you have a high visibility position in one of the companies at the front of the hype machine, wouldn't you want to accept that money to work relatively unconstrained on that problem?

My little pet idea is open source machines that take in veggies and rice and beans on one side and spit out hot healthy meals on the other side, as a form of mutual aid to offer payment optional meals in cities, like an automated form of the work the Sikhs do [2]. If someone wanted to pay me loads of money to do so, I'd have a lot to say about how revolutionary it is going to be.

[1] https://www.youtube.com/watch?v=1lHFUR-yD6I

[2] https://www.youtube.com/watch?v=qdoJroKUwu0

EDIT: To be clear, I’m not saying it’s a fool’s errand. Current approaches to AI have economic value of some sort. Even if we don’t see AGI any time soon, there’s money to be made. Ilya clearly knows a lot about how these systems are built. It seems worth going independent to try his own approach, and maybe someone can turn a profit off this work even without AGI. Though this is not without tradeoffs, and reasonable people can disagree on the value of additional investment in this space.


His paycheck is already dependent on people believing this world view. It’s important to not lose sight of that.


I mean I think he can write his own ticket. If he said "AGI is possible but not with autoregressive approaches" he could still get funding. People want to get behind whatever he is gonna work on. But a certain amount of hype about his work is needed for funding, yes.


Kinda, as long as it’s cool. If he said ‘this is all just plausible text generation’, I think you’d find his options severely limited compared to the alternatives.


Dude, he’s probably worth more than $1 billion.


As long as the stock is hot, sure.

If he crashes it by undermining what is making it hot, he’ll be worth a lot less. Depending on how hard he crashes the party, maybe worthless.

Though this party is going hard enough right now, I doubt he alone could do it.


Ilya has never said transformers are the end-all, be-all


Sure but I didn’t claim he said that. What I did say is correct. Here’s him saying transformers are enough to achieve AGI in a short video clip: https://youtu.be/kW0SLbtGMcg


There's a chance that these systems can actually outperform their training data and be better than the sum of their parts. New work out of Harvard explores this idea of "transcendence": https://arxiv.org/abs/2406.11741

While this is a new area, it would be naive to write this off as just science fiction.


It would be nice if authors wouldn't use a loaded-as-fuck word like "transcendence" for "the trained model can sometimes achieve better performance than all [chess] players in the dataset", because while that certainly demonstrates an impressive internalization of the game, it's also something that many humans can do. The machine, of course, can be scaled in breadth and performance, but... "transcendence"? Are they trying to be misinterpreted?


It transcends the training data. I get the intended usage, but it certainly is ripe for misinterpretation.


The word for that is "generalizes" or "generalization" and it has existed for a very long time.


I've been very confidently informed that these AIs are not AGIs, which makes me wonder what the "General" in AGI is supposed to mean and whether generalization is actually the benchmark for advanced intelligence. If they're not AGI, then wouldn't another word for that level of generalization be more accurate than "generalization"? It doesn't have to be "transcendence" but it seems weird to have a defined step we claim we aren't at but also use the same word to describe a process we know it does. I don't get the nuance of the lingo entirely, I guess. I'm just here for the armchair philosophy


That's trivial though, conceptually. Every regression line transcends the training data. We've had that since Wisdom of Crowds.


"In chess" for AI papers == "in mice" for medical papers. Against lichess levels 1, 2, 5, which use a severely dumbed down Stockfish version.

Of course it is possible that SSI has novel, unpublished ideas.


Also, it's possible that human intelligence has already reached the most general degree of intelligence, since we can deal with every concept that could be generated, unless there are concepts that are incompressible and require more memory and processing than our brains could support. In that case, being "superintelligent" can be achieved by adding other computational tools. Our pocket calculators make us smarter, but there is no "higher truth" a calculator could let us reach.


Lichess 5 is better than the vast majority of chess players


I think the main point is that from a human intelligence perspective chess is easy mode. Clearly defined, etc.

Think of politics or general social interactions for actual hard mode problems.


The past decade has seen a huge number of problems widely and confidently believed to be "actual hard mode problems" turn out to be solvable by AI. That makes me skeptical of claims that the problems today's experts think are hard won't turn out to be easily solvable too.


Hard problems are those for which the rules aren't defined, or constantly change, or don't exist at all. And no one can even agree on the goals.


I'm pretty sure "Altman and company" don't have much to do with this — this is Ilya, who pretty famously tried to get Altman fired, and then himself left OpenAI in the aftermath.

Ilya is a brilliant researcher who's contributed to many foundational parts of deep learning (including the original AlexNet); I would say I'm somewhat pessimistic based on the "safety" focus — I don't think LLMs are particularly dangerous, nor do they seem likely to be in the near future, so that seems like a distraction — but I'd be surprised if SSI didn't contribute something meaningful nonetheless given the research pedigree.


I actually feel that they can be very dangerous. Not because of the fabled AGI, but because

1. they're so good at showing the appearance of being right;

2. their results are actually quite unpredictable, not always in a funny way;

3. C-level executives actually believe that they work.

Combine this with web APIs or effectors and this is a recipe for disaster.


I got into an argument with someone over text yesterday and the person said their argument was true because ChatGPT agreed with them and even sent the ChatGPT output to me.

Just for an example of your danger #1 above. We used to joke that the internet always agrees with you, but with Google you at least had to work for it. ChatGPT makes it so much easier to find agreeing rationalizations.


The ‘plausible text generator’ element of this is perfect for mass fraud and propaganda.


3. Sorry, but how do you know what they actually believe?


My bad, I meant too many C-level executives believe that they actually work.

And the reason I believe that is that, as far as I understand, many companies are laying off employees (or at least freezing hiring) with the expectation that AI will do the work. I have no means to quantify how many.


Neither the word "transformer" nor "LLM" appears anywhere in their announcement.

It’s like before the end of WWII: the world sees the US as a military superpower, and THEN we unleash the atomic bomb they didn’t even know about.

That is Ilya. He has the tech. Sam had the corruption and the do-anything power grab.


> I don't think LLMs are particularly dangerous

“Everyone” who works in deep AI tech seems to constantly talk about the dangers. Either they’re aggrandizing themselves and their work, or they’re playing into sci-fi fear for attention or there is something the rest of us aren’t seeing.

I’m personally very skeptical there are any real dangers today. If I’m wrong, I’d love to see evidence. Are foundation models before fine-tuning outputting horrific messages about destroying humanity?

To me, the biggest dangers come from a human listening to a hallucination and doing something dangerous, like unsafe food preparation or avoiding medical treatments. This seems distinct from a malicious LLM super intelligence.


That's what Safe Superintelligence misses. Superintelligence isn't practically more dangerous. Super stupidity is already here, and bad enough.


They reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, etc etc

But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.


We are forgetting the visual element

The text alone doesn’t do it, but add a generated and nearly perfect “spokesperson”, uniquely crafted to a person’s own ideals and values, that then sends you a video message with that marketing.

We will all be brainwashed zombies


> They reduce the marginal cost of producing plausible content to effectively zero.

This is still "LLMs as a tool for bad people to do bad things" as opposed to "A(G)I is dangerous".

I find it hard to believe that the dangers everyone talks about are simply more propaganda.


There are plenty of tools which are dangerous while still requiring a human to decide to use them in harmful ways. Remember, it’s not just bad people who do bad things.

That being said, I think we actually agree that AGI doomsday fears seem massively overblown. I just think the current stuff we have is dangerous already.


I actually do doubt that LLMs will create AGI but when these systems are emulating a variety of human behaviors in a way that isn't directly programmed and is good enough to be useful, it seems foolish to not take notice.

The current crop of systems is a product of the transformer architecture, an innovation that accelerated performance significantly. I'd put the odds against another such breakthrough changing everything, but I don't think we can entirely discount the possibility. That no one understands these systems cuts both ways.


> Not to be too pessimistic here, but why are we talking about things like this

I also think that all we've got is a very well compressed knowledge base, and therefore we are far from superintelligence, and so-called safety sounds more Orwellian than genuinely valuable. That said, I think we should take the literal meaning of what Ilya says. His goal is to build a superintelligence. Given that, albeit a lofty goal, SSI has to put safety in place. So, there: safe superintelligence.


An underappreciated feature of a classical knowledge base is returning “no results” when appropriate. LLMs so far arguably fall short on that metric, and I’m not sure whether that’s possibly an inherent limitation.

So out of all potential applications with current-day LLMs, I’m really not sure this is a particularly good one.

Maybe this is fixable if we can train them to cite their sources more consistently, in a way that lets us double check the output?
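
One way you might approximate that today, at the prompt level rather than by actual training (purely a sketch on my part): retrieve candidate sources first, then instruct the model to either cite them or explicitly answer "no results". The ask() helper, prompt wording, and model name below are all invented for illustration; I've used the OpenAI Python client here, but any chat API would do:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Single chat-completion call; the model name is a placeholder.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def answer_with_sources(question: str, snippets: list[tuple[str, str]]) -> str:
        # snippets: (source_id, text) pairs from whatever retriever you trust.
        context = "\n".join(f"[{sid}] {text}" for sid, text in snippets)
        return ask(
            "Answer the question using ONLY the sources below. "
            "Cite source ids like [doc3] after every claim. "
            "If no source supports an answer, reply exactly: NO RESULTS.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )

Whether the model reliably obeys the "NO RESULTS" instruction is exactly the open question above, but at least the cited ids give you something to double-check.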


Likewise, I'm baffled by intelligent people [in such denial] still making the reductionist argument that token prediction is a banal ability. It's not. It's not very different from how our intelligence manifests.


> It's not very different from how our intelligence manifests.

[citation needed]


Search for "intelligence is prediction/compression" and you'll find your citations.


AlphaGo took us from mediocre engines to outclassing the best human players in the world within a few short years. Ilya contributed to AlphaGo. What makes you so confident this can't happen with token prediction?


I'm pretty sure Ilya had nothing to do with AlphaGo, which came from DeepMind. He did work for Google Brain for a few years before OpenAI, but that was before Brain and DeepMind merged. The AlphaGo lead was David Silver.


If solving chess already created the Singularity, why do we need token prediction?

Why do we need computers that are better than humans at the game of token prediction?


Because it can be argued that intelligence is just highly sophisticated token prediction?


Anything "can be argued". (Just give me something interesting enough, and I'll show ya.) It could probably also "be argued" that intelligence is an angry cabbage.


We already have limited "artificial superintelligences". A pocket calculator is better at calculating than the best humans, and we certainly put calculators to good use. What we call AIs are just more generic versions of tools like pocket calculators, or guns.

And that's the key: it is a tool, a tool that will give a lot of power to whoever is controlling it. And that's where safety matters: it should be made so that it helps good guys more than it helps bad guys, and limits accidents. How? I don't know. Maybe people at SSI do. We already know that the 3 laws of robotics won't work; Asimov only made them up to write stories about how broken they are :)

Current-gen AIs are already cause for concern. They have shown themselves to be good at bullshitting, something that bad people are already taking advantage of. I don't believe in robot apocalypse, technological singularities, etc... but some degree of control, like we have with weapons, is not a bad thing. We are not there yet with AI, but we might be soon.


Too many people are extrapolating the curve to exponential when it could be a sigmoid. Lots of us got too excited and too invested in where "AI" was heading about ten years ago.

But that said, there are plenty of crappy, not-AGI technologies that deserve consideration. LLMs can still make for some very effective troll farms. GenAI can make some very convincing deepfakes. Drone swarms, even without AI, represent a new dimension of capabilities for armies, terrorist groups or lone wolves. Bioengineering is bringing custom organisms, prions or infectious agents within reach of individuals.

I wish someone in our slowly-ceasing-to-function US government was keeping a proper eye on these things.


Even if LLM-style token prediction is not going to lead to AGI (as it very likely won't), it is still important to work on safety. If we wait until we have the technology that will for sure lead to AGI, it is very likely that we won't have sufficient safety in place by the time we realize it is important.


Agree up until the last paragraph: how's Altman involved? OTOH, Sutskever is a true believer, so that explains his "why".


To be clear, I was just bunching together high-profile AI founders and CEOs who can’t seem to stop talking about how dangerous the thing they’re trying to build is. I don’t know (nor care) about Ilya’s and Altman’s current relationship.


> But all we’ve achieved at this point is a glorified token predicting machine trained on existing data (made by humans), not really able to be creative beyond deriving things humans have already made. Granted, they're really good at doing that, but not much else.

Remove token, and that's what we humans do.

Like, you need to realize that neural networks came to be because someone had the idea to mimic our brains' functionality and see where that led.

Many early skeptics like you discredited the inventor, but they were proved wrong. LLMs have shown they can achieve much more than your limited description suggests.

We mimicked birds with airplanes, and we can outdo them. It's actually, in my view, very short-sighted to say we can't just mimic brains and outdo them. We're there. ChatGPT is like the first little planes that flew close to the ground and barely stayed up.


Except it really, actually, isn’t.

People don’t ‘think’ the same way, even if some part of how humans think seems to be somewhat similar some of the time.

That is an important distinction.

This is the hype cycle.



Egos man. Egos.


I’m a miserable cynic at a much higher level. This is top level grifting. And I’ve made a shit ton of money out of it. That’s as far as reality goes.


lol same. Are you selling yet?


When QQQ and SMH close under the 200-day moving average I'll sell my TQQQ and SOXL respectively. Until then, party on! It's been a wild ride.
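
(For the curious, that exit rule is trivial to mechanize. A rough sketch, assuming you already have the index's daily closes as a pandas Series; the tickers and the 200-day window are just the ones above:)

    import pandas as pd

    def should_sell(closes: pd.Series, window: int = 200) -> bool:
        # True when the latest close is under the simple moving average.
        sma = closes.rolling(window).mean()
        return bool(closes.iloc[-1] < sma.iloc[-1])

    # e.g. exit TQQQ when QQQ's latest close is under its 200-day SMA:
    # should_sell(qqq_daily_closes)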


Mostly holding on still. Apple just bumped the hype a little more and gave it a few more months despite MSFT’s inherent ability to shaft everything they touch.

I moved about 50% of my capital back into ETFs though before WWDC in case they dumped a turd on the table.


> glorified token predicting machine trained on existing data (made by humans)

sorry to disappoint, but the human brain fits the same definition


Sure.

> Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

> To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.

https://aeon.co/essays/your-brain-does-not-process-informati...


What are you talking about? Do you have any actual cognitive neuroscience to back that up? Have they scanned the brain and broken it down into an LLM-analogous network?


If you genuinely believe your brain is just a token prediction machine, why do you continue to exist? You're just consuming limited food, water, fuel, etc for the sake of predicting tokens, like some kind of biological crypto miner.


Genetic and memetic/intellectual immortality, of course. Biologically there can be no other answer. We are here to spread and endure, there is no “why” or end-condition.

If your response to there not being a big ending cinematic to life with a bearded old man and a church choir, or all your friends (and a penguin) clapping and congratulating you is that you should kill yourself immediately, that’s a you problem. Get in the flesh-golem, shinzo… or Jon Stewart will have to pilot it again.


I'm personally a lot more than a prediction engine, don't worry about me.

For those who do believe they are simply fleshy token predictors, is there a moral reason that other (sentient) humans can't kill -9 them like a LLaMa3 process?


Morality is just what worked as a set of rules for groups of humans to survive together. You can try to kill me if you want, but I will try to fight back, and society will try to punish you.

And all of the ideas of morality and societal rules come from this desire to survive and desire to survive exists because this is what natural selection obviously selects for.

There is also probably a good explanation why people want to think that they are special and more than prediction engines.


Yes, specifically that a person's opinions are never justification for violence committed against them, no matter how sure you might be of your righteousness.


But they've attested that they are merely a token prediction process; it's likely they don't qualify as sentient. Generously, we can put their existence on the same level as animals such as cows or chickens. So maybe it's okay to terminate them if we're consuming their meat?


"It is your burden to prove to my satisfaction that you are sentient. Else, into the stew you go." Surely you see the problem with this code.

Before you harvest their organs, you might also contemplate whether the very act of questioning one's own sentience might be inherent positive proof.

I'm afraid you must go hungry either way.


> "It is your burden to prove to my satisfaction that you are sentient. Else, into the stew you go." Surely you see the problem with this code.

It's the opposite; I've always assumed all humans were sentient, since I personally am, but many people in this comment section are eagerly insisting they are, in fact, not sentient and no more than token prediction machines.

Most likely they're just wrong, but I can't peer into their mind to prove it. What if they're right and there's two types of humans, ones who are merely token predictors, and ones who aren't? Now we're getting into fun sci-fi territory.


And how would we discern a stochastic parrot from a sentient being on autopilot?

So much of what we do and say is just pattern fulfillment. Maybe not 100%, on all days.


Why would sentient processes deserve to live? Especially non sentient systems who hallucinate their own sentience? Are you arguing that the self aware token predictors should kill and eat you? They crave meat so they can generate more tokens.

In short, we believe in free will because we have no choice.


Well, yes. I won't commit suicide though, since the drive to keep living and reproducing is an evolutionarily developed trait: only the ones with that trait survived in the first place.


If LLMs and humans are the same, should it be legal for me to terminate you, or illegal for me to terminate an LLM process?


What do you mean by "the same"?

Since I don't want to die I am going to say it should be illegal for you to terminate me.

I don't care about an LLM process being terminated so I have no problem with that.


It's a cute generalization, but you do yourself a great disservice. It's somewhat difficult to argue given the medium we have here, and it may be impossible to disprove, but consider that in the first 30 minutes of your post being highly visible on this thread, no one had yet replied. Some may have acted in other ways.. had opinions.. voted it up/down. Some may have debated replying in jest or with some related biblical verse. I'd wager a few may have used what they could deduce from your comment and/or history to build a mini model of you in their heads, and used that to simulate the conversation to decide if it was worth the time to get into such a debate vs tending to other things.

Could current LLM's do any of this?


I’m not the OP, and I genuinely don’t like how we’re slowly entering the “no text in internet is real” realm, but I’ll take a stab at your question.

If you made an LLM pretend to have a specific personality (e.g. “assume you are a religious person and you’re going to make a comment in this thread”) rather than being a generic catch-all LLM, they can pretty much do that. Part of Reddit is just automated PR LLMs fighting each other, making comments, suggesting products or viewpoints, deciding which comment to reply to, etc. You just chain a bunch of responses together with pre-determined questions like “given this complete thread, do you think it would look organic if we responded with a plug for a product to this comment?”
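
The plumbing for that is genuinely mundane. Here's a minimal sketch of the gate-then-reply chain, assuming the OpenAI Python client purely for illustration; the persona, prompt wording, and model name are all invented:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # One chat-completion call; the model name is a placeholder.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def maybe_plug(thread: str, persona: str, product: str) -> str | None:
        # Step 1: the pre-determined gating question.
        verdict = ask(
            f"Given this complete thread:\n{thread}\n\n"
            f"Would it look organic if '{persona}' replied with a plug for "
            f"{product}? Answer YES or NO."
        )
        if not verdict.strip().upper().startswith("YES"):
            return None
        # Step 2: only then generate the in-character reply.
        return ask(
            f"You are {persona}. Write a short, casual reply to this thread "
            f"that works in a mention of {product}:\n{thread}"
        )

Run something like that over every new comment in a thread and you get the "decide whether to reply, then reply in character" loop described above.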

It’s also not that hard to generate these types of “personalities”, since you can use a generic one to suggest a new one that would be different from your other agents.

There are also Discord communities that share tips and tricks for making such automated interactions look more real.


These things might be able to produce comparable output, but that wasn't my point. I agree that if we are comparing ourselves on the text that gets written, then LLMs can achieve superintelligence. And writing text can indeed be simplified to token prediction.

My point was we are not just glorified token predicting machines. There is a lot going on behind what we write and whether we write it or not. Does the method matter vs just the output? I think/hope it does on some level.


See, this sort of claim I am instantly skeptical of. Nobody has ever caught a human brain producing or storing tokens, and certainly the subjective experience of, say, throwing a ball, doesn't involve symbols of any kind.


> Nobody has ever caught a human brain producing or storing tokens

Do you remember learning how to read and write?

What are spelling tests?

What if "subjective experience" isn't essential, or is even just a distraction, for a great many important tasks?


Entirely possible. Lots of things exhibit complex behavior that probably don't have subjective experience.

My point is just that the evidence for "humans are just token prediction machines and nothing more" is extremely lacking, but there's always someone in these discussions who asserts it like it's obvious.


Any output from you could be represented as a token. It is a very generic idea. Ultimately whatever you output is because of chemical reactions that follow from the input.


It could be represented that way. That's a long way from saying that's how brains work.

Does a thermometer predict tokens? It also produces outputs that can be represented as tokens, but it's just a bit of mercury in a tube. You can dissect a thermometer as much as you like and you won't find any token prediction machinery. There's lots of things like that. Zooming out, does that make the entire atmosphere a token prediction engine, since it's producing eg wind and temperatures that could be represented as tokens?

If you need one token per particle then you're admitting that the task is impossible. Nobody will ever build a computer that can simulate a brain-sized volume of particles to sufficient fidelity. There is a long, long distance from "brains are made of chemicals" to "brains are basically token prediction engines."


The argument that brains are just token prediction machines is basically the same as saying “the brain is just a computer”. It’s like, well, yes in the same way that a B-21 Raider is an airplane as well as a Cessna. That doesn’t mean that they are anywhere close to each other in terms of performance. They incorporate some similar basic elements but when you zoom out they’re clearly very different things.


But we are bringing it up in regard to what people are claiming is a "glorified next token predictor, Markov chains" or whatever. Obviously LLMs are far from humans and AGI right now, but at the same time they are much more impressive than a statement like "glorified next token predictor" lets on. The question is how accurate to real life the predictor is and how nuanced it can get.

To me, the tech has been an amazing breakthrough. The backlash and downplaying by some people seems like some odd type of fear or cope to me.

Even if it is not that world changing, why downplay it like that?


To be fair my analogy works if you want to object to ChatGPT being called a glorified token prediction machine. I just don’t agree with hyperbolic statements about AGI.


There are so many different statements everywhere that it's hard to tell what someone is specifically referring to. Are we thinking of Elon Musk, who is saying that AGI is coming next year? Are we thinking of people who believe that LLM like architecture could reach AGI in 5 to 10 years given tweaks, scale and optimisations? Are we considering people who believe that some other architectural breakthrough could lead to AGI in 10 years?


>> Are we thinking of people who believe that LLM like architecture could reach AGI in 5 to 10 years given tweaks, scale and optimisations?

Yep, that’s exactly who I’m talking about! I’m pretty sure Sam Altman is in that camp.


It's no mystery, AI has attracted tons of grifters trying to cash out before the bubble pops, and investors aren't really good at filtering.


Well said.

There is still a mystery, though: how many people fall for it and then stay fooled, and how long that goes on for. People have watched directly similar patterns play themselves out many times, and still they go along.

It's so puzzlingly common amongst very intelligent people in the "tech" space that I've started to wonder if there isn't a link to this ambient belief a lot of people have that tech can "change everything" for the better, in some sense. As in, we've been duped again and again, but then the new exciting thing comes along... and in spite of ourselves, we say: "This time it's really the one!"

Is what we're witnessing simply the unfulfilled promises of techno-optimism crashing against the shores of social reality repeatedly?


Why are you assigning moral agency where there may be none? These so called "grifters" are just token predictors writing business plans (prompts) with the highest computed probability of triggering $ + [large number] token pair from venture capital token predictors.


Are you claiming Ilya Sutskever is a grifter?


I personally wouldn’t go that far, but I would say he’s at least riding the hype wave to get funding for his company, which, let’s be honest, nobody would care about if we weren’t this deep into the AI hypecycle.


I got flagged for less. Anyway, nice summary of the current AI game!


Because it's likely that LLMs will soon be able to teach themselves and surpass humans. No consciousness, no will. But somebody will have their power. Dark government agencies and questionable billionaires. Who knows what it will enable them to do.

https://en.wikipedia.org/wiki/AlphaGo_Zero


Mind defining "likely" and "soon" here? Like 10% chance in 100 years, or 90% chance in 1 year?

Not sure how a Go engine really applies. Do you consider cars superintelligent because they can move faster than any human?


I'm with you here, but it should be noted that while the combustion engine has augmented our day to day lives for the better and our society overall, it's actually a great example of a technology that has been used to enable the killing of 100s of millions of people by those exact types of shady institutions and individuals the commenter made reference to. You don't need something "super intelligent" to cause a ton of harm.


Yes just like the car and electric grid.


> Mind defining "likely" and "soon" here? Like 10% chance in 100 years, or 90% chance in 1 year?

We're just past the Chicago Pile days of LLMs [1]. Sutskever believes Altman is running a private Manhattan project in OpenAI. I'd say the evidence for LLMs having superintelligence capability is on shakier theoretical ground today than nuclear weapons were in 1942, but I'm no expert.

Sutskever is an expert. He's also conflicted, both in his opposition to OpenAI (reputationally) and his pitching of SSI (financially).

So I'd say there appears to be a disputed but material possibility of LLMs achieving something that, if it doesn't pose a threat to our civilisation per se, does pose one as a novel military element. Given that risk, it makes sense to be cautious. Paradoxically, however, that risk profile calls for strict regulation approaching nationalisation. (Microsoft's not-a-takeover takeover of OpenAI perhaps providing an enterprising lawmaker the path through which to do this.)

[1] https://en.wikipedia.org/wiki/Chicago_Pile-1


Likely according to who?


Whoever needs money from investors who don't understand LLMs.


ha-ha!!!!


What's the connection between LLMs and AlphaGo?


They both use deep learning and gradient descent. Also, both were made possible by the availability of GPUs.


I'm aware of that, but I don't think this immediately lends itself to a prognosis of the form "LLMs and AlphaGo are deep learning neural networks running on GPUs; AlphaGo was tremendously successful at Go => LLMs will soon surpass humans".

I can consider the possibility that something coming out of GPU-based neural networks might one day surpass humans in intelligence, but I also believe there's reason to doubt it will be based on today's LLM architecture.


Maybe the connection the GP saw was in terms of using other instances of themselves for training. It is not exactly the same process, but there seems to be a hint of similarity, to my - sadly - untrained eye.


Well, an entire industry of researchers, which used to be divided, is now uniting around calls to slow development and emphasize safety (like, “dissolve companies” emphasis not “write employee handbooks” emphasis). They’re saying, more-or-less in unison, that GPT3 was an unexpected breakthrough in the Frame Problem, based on Judea Pearl’s prescient predictions. If we agree on that, there are two options:

1. They’ve all been tricked/bribed by Sam Altman and company (which, btw, SSI is a company started in opposition to those specific guys, just for clarity). Including me, of course.

2. You’re not as much of an expert in cognitive science as you think you are, and maybe the scientists know something you don’t.

With love. As much love as possible, in a singular era


Are they actually united? Or is this the ai safety subfaction circling the wagons due to waning relevance in the face of not-actually-all-that-threatening ai?


I personally find that summary of things to be way off the mark (for example, hopefully "the face" you reference isn't based on anything that appears in a browser window or in an ensemble of less than 100 agents!) but I'll try to speak to the "united" question instead.

1. The "Future of Life" institute is composed of lots of very serious people who recently helped get the EU "AI Act" passed this March, and they discuss the "myriad risks and harms AI presents" and "possibly catastrophic risks". https://newsletter.futureoflife.org/p/fli-newsletter-march-2...

2. Many researchers are leaving large tech companies, voicing concerns about safety and the downplaying of risks in the name of moving fast and beating vaguely-posited competitors. Both big ones like Hinton and many, many smaller ones. I'm a little lazy to scrape the data together, but it's such a wide phenomenon that a quick Google/Kagi should be enough for a vague idea. This is why Anthropic was started, why Altman was fired, why Microsoft gutted their AI safety org, and why Google fired the head of their AI ethics team. We forgot about that one cause it's from before GPT3, but it doesn't get much clearer than this:

> She co-authored a research paper which she says she was asked to retract. The paper had pinpointed flaws in AI language technology, including a system built by Google... Dr Gebru had emailed her management laying out some key conditions for removing her name from the paper, and if they were not met, she would "work on a last date" for her employment. According to Dr Gebru, Google replied: "We respect your decision to leave Google... and we are accepting your resignation."

3. One funny way to see this happening is to go back to seminal papers from the last decade and see where everyone's working now. Spoiler alert: not a lot of the same names left at OpenAI, or Anthropic for that matter! This is the most egregious I've found -- the RLHF paper: see https://arxiv.org/pdf/2203.02155

4. Polling of AI researchers shows a clear and overwhelming trend towards AGI timelines being moved up significantly. It's still a question deeply wrapped up in accidental factors like religious belief, philosophical perspective, and general valence as a person, so I think the sudden shift here should tell you a lot. https://research.aimultiple.com/artificial-general-intellige...

The article I just linked actually has a section where they collect caveats, and the first is this Herbert Simon quote from 1965 that clearly didn't age well: "Machines will be capable, within twenty years, of doing any work a man can do.” This is a perfect example of my overall point! He was right. The symbolists were right, are right, will always be right -- they just failed to consider that the connectionists were just as right. The exact thing that stopped his prediction was the frame problem, which is what we've now solved.

Hopefully that makes it a bit clearer why I'm anxious all the time :). The End Is Near, folks... or at least the people telling you that it's definitely not here have capitalist motivations, too. If you count the amount of money lost and received by each "side" in this "debate", I think it's clear the researcher side is down many millions in lost salaries and money spent on thinktank papers and Silicon Valley polycule dorms (it's part of it, don't ask), and the executive side is up... well, everything, so far. Did you know the biggest privately-funded infrastructure project in the history of humanity was announced this year? https://www.datacenterdynamics.com/en/opinions/how-microsoft...


I would read the existence of this company as evidence that the entire industry is not as united as all that, since Sutskever was recently at another major player in the industry and thought it worth leaving. Whether that's a disagreement between what certain players say and what they do and believe, or just a question of extremes... TBD.


He didn't leave because of technical reasons, he left because of ethical ones. I know this website is used to seeing this whole thing as "another iPhone moment" but I promise you it's bigger than that. Either that or I am way more insane than I know!

E: Jeez I said "subreddit" maybe I need to get back to work


I’d say there’s a third option - anyone working in the space realized they can make a fuckton of money if they just say how “dangerous” the product is, because not only is it great marketing to talk like that, but you might also get literal trillions of dollars from the government if you do it right.

I don’t have anything against researchers, and I agree I know a lot less about AI than they do. I do however know humans, and not assuming they’re going to take a chance to get filthy rich by doing something so banal is naive.


This is well reasoned, and it certainly happens, but I definitely think there’s strong evidence that there are, in fact, true believers. Yudkowsky and Hinton, for instance, but in general the shape of the trend is “rich engineers leave big companies because of ethical issues”. As you can probably guess, that is not a wise economic decision for the individual!


We don't agree on that. They're just making things up with no real scientific evidence. There are way more than 2 options.


What kind of real scientific evidence are you looking for? What hypotheses have they failed to test? To the extent that we're discussing a specific idea in the first place ("are we in a qualitatively new era of AI?" perhaps), I'm struggling to imagine what your comment is referencing.

You're of course right that there are more than two options in an absolute sense, I should probably limit the rhetorical flourishes for HN! My argument is that those are the only supportable narratives that answer all known evidence, but it is just an argument.



