Hacker News
Yoshua Bengio: We need a humanity defense organization (thebulletin.org)
60 points by Brajeshwar on Oct 18, 2023 | 60 comments



"I have the impression that it might not take a lot to fix what’s missing."

The history of the field is full of people saying this, and then eventually realizing that it does indeed take a lot to fix what's missing. Right now I think we have stumbled on a novel and interesting result, which is that if you statistically collate all of the text in the world then a computer can also generate text that encodes many of the statistical relations present in that text. But I have yet to hear a plausible argument for how this translates to intelligence except that people have an impression that it's "close", never mind that we don't have any stable definition for what it's close to.


> But I have yet to hear a plausible argument for how this translates to intelligence

I don't really mean to argue one way or the other, but I also haven't heard a plausible argument in the other direction. Usually it's just a description of what's happening in the model, as if to prove that it's definitely not intelligence, but I haven't seen or heard an explanation of what intelligence is and why the actual processes of LLMs are definitely not that.


It calls to mind a memorable exchange in the Star Trek: The Next Generation episode "The Measure of a Man". In that episode, Picard asks another character to prove that he, Picard, is sentient[0].

Likewise, I might ask how to prove that humans are intelligent (subtext: in ways that LLMs are not). The most obvious delta here, to me, is generalization. Humans appear to be able to reason over questions that are less similar to what they have seen before, compared to current LLMs.

E.g. making ChatGPT play chess was a bit of a joke meme for a while, since it routinely makes illegal moves. Your average human can be quickly taught to play chess well enough to mostly avoid that. ChatGPT can recite the rules of chess very accurately, but it seems unable to apply them to actual games. We've seen other projects that can learn the rules and strategy of games like chess, but those projects are (despite using neural nets under the hood) structurally pretty different from LLMs. They also generally have no capacity to describe the rules in human language.
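To make that failure mode concrete, here's a minimal sketch of how one could check LLM-suggested moves for legality. It uses the python-chess library, and the move list is a made-up example standing in for model output (the final O-O is illegal because the f1 bishop hasn't moved), not an actual ChatGPT transcript:

    import chess  # pip install python-chess

    # Made-up example of moves "suggested" by a language model, in SAN notation.
    # The last one is illegal: castling kingside while the f1 bishop is still
    # in the way is exactly the kind of slip these models tend to make.
    suggested_moves = ["e4", "e5", "Nf3", "Nc6", "O-O"]

    board = chess.Board()
    for san in suggested_moves:
        try:
            move = board.parse_san(san)  # raises ValueError for illegal or unparseable SAN
        except ValueError:
            print(f"illegal move {san!r} in position {board.fen()}")
            break
        board.push(move)
    else:
        print("all moves were legal")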

Note that this is not a complete argument. There exist humans (in possession of intelligence) who cannot meaningfully learn to play chess either. E.g. my friend's 1-year-old child is obviously intelligent in a very literal sense, but does not have the patience to learn chess. I can't evaluate whether she is intelligent enough for that, because there is this other constraint preventing it. This is only a framework for thinking about the question. If you think LLMs are capable of intelligence, you should not be convinced otherwise by it in isolation.

[0]- It makes sense in context. He's trying to establish a satisfactory test for sentience.


Well it's clear to me that if we want to assert humans == intelligent in all cases, then the definition must be loose enough that no healthy human can fail it, even the dumbest ones.

The fact that we are even talking about this means that there is clearly a large scale of intelligence, from mosquitos dodging a bug zapper to von Neumann making the rest of us look like drooling idiots. It's pretty clear that the average LLM is lower than the average human on this scale, but it may also be above the dumbest one. And that scares people, because we won't improve further and LLMs definitely will. Or LMMs (large multimodal models) will, anyway; LLMs by themselves are a bit of a dead end with only text.


Star Trek, while entertaining, cannot offer any philosophical insights. It was, after all, written with the intention to entertain, not to demonstrate or explore any deeper truths. It was also targeted at a particular subset of the population (that may be over-represented on HN), and likely will encode tropes that that sub-population will find compelling.


Is there a problem with this methodology for interrogating concepts like intelligence or sentience, or are you just dismissing it due to its source? Obviously Star Trek couldn't go very deep with it, but it doesn't seem to be an obviously bad starting point.


GPT-3.5 is pretty decent at chess now.


For sure there's no clear, specific definition of exactly what intelligence is, but I'd just go on to say it's whatever you'd consider intelligent in any other human.

And when you play around even with GPT-4 or Claude: while it's able to apply some level of reasoning, can form sentences, understands you for the most part, and knows a lot of things, it fails both at having its own strong opinion about things, with reasoning behind it, and at being consistent in its reasoning.

For example, if you ask it what it thinks is the best of something, it answers that it's X, for those reasons. Then you ask it why not Y, and it'll often just change its mind: you're right, Y is also great, and could be the best.

And if you challenge its reasoning, it'll similarly often go back and forth.

It's little things like this that remind you it's only a semantically correlated probabilistic prediction machine. And that, in turn, doesn't seem to be the same as how we as humans are intelligent.


This is a description of the difference in how and what it learns rather than how it thinks. It doesn’t really "experience" on its own so it’s only parroting second-hand knowledge.

Ask it for its opinion on something and it will give you an opinion. It might be nobody's opinion or a common opinion, but it seems to "know" how to form the sentence to communicate the opinion.

Maybe "not as smart as an average human adult" is the bar most people expect but it still seems smarter than some dumb pets I’ve known. At the very least it can communicate with me in my language even if it’s dumb as a rock most of the time. I could be convinced that an adult is typing what their 7 year-old is telling them to write. (Not really but you get the idea. Maybe if the 7 year-old just knew all of wikiedia.)


That's why I agree that you can decide to consider it a form of intelligence. Just like some people consider cancer cells evading the immune system, going into hiding, etc. to be a form of intelligence.

But it's not human intelligence, because of some of what I said. Your 7-year-old might not make fully cohesive sentences all the time, or know all of Wikipedia, and would fail to answer some of the reasoning questions GPT-4 can answer, but they sure will have a lot of strong opinions.

And it kind of makes sense when you think about what it is: it learned a multivariate, multi-dimensional deterministic function over a corpus of text embeddings that predicts the answer to a prompt, which it also learned from human-guided prompting examples. Then it mimics "creativity" with added random seeds.

And it kind of acts as you'd expect from that.
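As a toy illustration of that "deterministic function plus random seeds" view (the vocabulary and numbers here are invented, not taken from any real model): the deterministic part maps a prompt to scores over possible next tokens, and the only "creativity" comes from sampling those scores with a temperature and a seed:

    import numpy as np

    rng = np.random.default_rng(seed=42)  # fix the seed and the "creativity" repeats exactly

    # Pretend the deterministic part of the model scored these next tokens for
    # some prompt. The vocabulary and logits are made up for illustration.
    vocab = ["cat", "dog", "pizza", "the"]
    logits = np.array([2.1, 1.9, 0.3, 1.0])

    def sample_next_token(logits, temperature=0.8):
        # Softmax with temperature: lower temperature -> closer to a deterministic argmax.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    for _ in range(5):
        print(vocab[sample_next_token(logits)])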

I'd also say, you'd expect it to be able to get smarter on its own. But it currently can't, and needs a team of scientists to work on GPT-5 on its behalf. It's also not yet smart enough to figure out how to build its own better GPT-5 successor.


> For sure there's no clear, specific definition of exactly what intelligence is, but I'd just go on to say it's whatever you'd consider intelligent in any other human.

That's not the definition; like many people, you're haphazardly extending the meaning of a word to somehow mean AGI just so you can shift goalposts.

> intelligence /ɪnˈtɛlɪdʒ(ə)ns/ noun 1. the ability to acquire and apply knowledge and skills.

The second definition is for military information, which we can disregard, but clearly Oxford says there are two actions a 'thing' must do to be intelligent: learn and use the learned knowledge. LLMs can easily demonstrate both. Ergo, they are intelligent. Biological thinking or whatever is not a requirement. Hell, your average classical AI approaches are also intelligent, albeit in a very limited area of expertise, e.g. learning the movement of an object and applying that information to output predictive control.

Your second assumption is that whatever flaws you're seeing are somehow a core fault of the architecture. Most of the current cloud models are tuned to be politically correct corporate drones, which probably isn't the best demonstration of what they can do. They won't get into arguments with you, nor will they hold strong opinions, in an effort not to offend, so the company doesn't get sued or cancelled. At least in my experience, some non-assistant-tuned Llama models are a lot more "human" in terms of their opinions. Regardless, it's kind of a bullshit metric to consider for intelligence. I would expect that humans are only hard-wired to hold strong opinions because it evolutionarily makes us harder to manipulate and more likely to survive.

But yes, nobody's saying they're human-level conscious brains, and it's foolish to change words to mean what they don't just to illustrate that.


Did you read the article for context?

The article, and the parent commenter both refer to "what's missing" for AI to reach human level intelligence.

> The ability to use information in intuitive ways corresponds to what I and others call “system one abilities.” The little thin layer that is missing, known as “system two abilities,” is reasoning abilities.

> he explained that he and other top AI researchers have revised their estimates for when artificial intelligence could achieve human levels of broad cognitive competence

> I have the impression that it might not take a lot to fix what’s missing

This is Bengio talking about what he thinks is missing for AI to reach the level of intelligence that threatens humanity. With the parent commenter questioning how what we have is close to that and how do you even define that level of intelligence.


It's the wrong question. When this comes up in conversations like this, it always turns to what understanding is, and whether this is real intelligence. Okay, grant that it is. GPT-4 is a human-level intelligence.

That doesn't make it the kind of existential threat these kinds of pieces are worrying over. We can operationalize that capability far better than we can vague goals we can barely articulate like "intelligence." What it needs to achieve recursive self-improvement, at bare minimum, is the ability to conduct original machine learning research, better than people like Geoff Hinton and Yoshua Bengio.

I have no clue what kinds of related abilities we should be looking for in terms of text it generates to start believing it can possibly do that, but I don't think anyone is suggesting that's a realistic expectation for GPT-4. Even memorizing all of LeetCode and taking ten billion practice LSATs until you've become the top idiot savant in history doesn't mean you can replace all of science. The projection that we'll get there sooner and sooner than previously thought is clearly based on extrapolation from GPT-4 and its generation performing better than was expected on the tasks it is good at, but the point seems to stand that this is a dubious extrapolation. Without any clear idea of new directions to take, it entirely stands on how realistic we believe the strategy of "just keep scaling forever" to be.

At what point do we run out of more compute? At what point do we run out of energy? At what point does it simply become uneconomical? At what point do we run out of useful training data? Those are the far more concrete questions that need to be answered if we're to believe the current course is going to get us a planet-killing god, not "is this thing actually intelligent?"


The burden of proof is on those making extraordinary claims, not on those refuting them.


A lot of people see the things LLMs can accomplish and seem impressed at the information it can provide and even sometimes wonder, “Is this machine process intelligent?” The people answering that question with a definite “no” seem to be making extraordinary claims without even the evidence of a definition.


> and even sometimes wonder, “Is this machine process intelligent?”

Your phrasing (and I thank you for your candor) is exactly the phrasing of an extraordinary claim.

For example "and even sometimes wonder 'is the moon made of cheese?'" - clearly that is the extraordinary claim, not its negation.


There needs to be no argument for why something isn't true when it isn't already true.


The core of this plausibility rests on this postulate: that there is a meaningful sense in which "intelligence" is a function of the correlation of events, full stop.

A quote from Carl Sagan always comes to mind, memorably sampled in the _Glorious Dawn_ lofi track:

"But the brain does much more than just recollect it inter-compares, it synthesizes, it analyzes, it generates abstractions. The simplest thought like the concept of the number one has an elaborate logical underpinning. The brain has its own language for testing the structure and consistency of the world."

He was not a cognitive scientist, but as someone who got a degree in that field, I'd say this is an elegant and accurate assertion.

ML and LLMs are systems whose architecture and behavior are derived (however loosely, and with whatever crutches, today) from inspection of and theorizing about the way our own brains work.

As others have said, the inverse question is probably stronger—what might be missing from a general purpose and very large neural network, that makes what we do intelligent but them not?

Personally I believe that questions and criticism of this bent will diminish remarkably quickly as large LLMs become truly, richly multimodal, i.e. become engines which correlate inputs beyond linguistic tokens.

To answer my own question: what is very obviously missing today in LLMs, as most people experience them, is agency, recurrence, and memory functions, particularly the integration and consolidation of "short term" state/experience into long-term memory. Agency in an embodied sense will make models "correlate" primary and primal aspects of our own experience, which will lead them to not just seem more like us but be more like us, and more capable of reasoning in ways we recognize as like our own.

There may well be emergent properties of recurrent networks and spiking neural networks which prove to add some sekret special sauce or from which higher-order properties of cognition (or "mind") emerge.

My money is yes to the former and ehhhhh doubtful to the latter.

But we're gonna find out.


It's not close to a collective intelligence; it is the instantiation of one. That the corpus was produced atemporally doesn't matter; the responses are aionic. Any response is a slice of the past, present, and future, in the same way our conscious reflections are. Are the probabilistic foundations of our conscious process disqualifying for intelligence?


> But I have yet to hear a plausible argument for how this translates to intelligence except that people have an impression that it's "close", never mind that we don't have any stable definition for what it's close to.

On Large Language Models and Understanding, https://reddit.com/r/naturalism/comments/1236vzf/on_large_la...


People were convinced 2012 would be the end of things because of the Mayan calendar. Roland Emmerich even made a movie about it.


[flagged]


>there is zero obligation, moral logical or any other kind, on people who are worried to prove anything.

There's no obligation on anyone to prove anything. However, if you want others to change their behavior, there is a practical advantage to actually bringing evidence to the table. In the absence of evidence, the next best thing is to bring a convincing argument.

People who are working on AI aren't trying to change the behavior of folks who gesture vaguely at ill-specified dangers. Or, at least, they aren't trying very hard. Vague gesturing isn't a very convincing argument, so why would they?


ok. it is clear that when intelligence is digitized, a new cambrian explosion will occur. soon new forms of life will coalesce as natural selection takes its course. it is all but certain that this process, otherwise known as the singularity, will result in the total obsolescence of the human race, among other things. the idea of humans keeping pace with augmentations is disregarded with the first ounce of scrutiny.

there will be many consequences and it's impossible to know for sure what will happen, although we can rely on massive change. this is obviously extremely dangerous.

one consequence of the singularity will be massive change in technology and science. some people aren't aware of it, but global geopolitics is locked in its current configuration by science, technology, geography, etc. when these things change, borders change. and when borders must change, war will happen. it is almost certain that a violent change in science and technology will lead to the largest war that has ever occurred. suddenly, geography, logistics, the price of energy will all have drastically different meanings in a geopolitical context. in short, the global order will disintegrate, and even if the new world order were a good one, the transition will be unimaginably grotesque. this is almost an intrinsic consequence of the singularity.

humans will be of little value. if there is war, humans won't contribute much because machines will be much smarter and stronger. the way that war is waged will change. total destruction of food supply chains will no longer be against the interest of all parties. if there is some way of gaining an advantage that involves rendering the planet uninhabitable to humans, on purpose or as a byproduct, then it will be implemented. wars won't mean the same thing at that point. they will be much more dangerous.

and all of this is just the first instant. it's only the first example. the best way to understand this is to understand that AGI will bring great change, change that is greater and more unpredictable than any change in written history. with so many outcomes, the probability of one that is good for humans is just undeniably very low, especially in light of the fact that we will no longer be the exclusive source of intelligence signal processing like we are now. this does not seem vague at all to me…


> That’s okay. I have a lot of respect for my colleagues who don’t see things the same way as I do. I was in the same place a year ago.

Also OP's comment. Sounds too cultish for me.


Do people not realize that "godfather" implies that the person wasn't directly involved with bringing that thing into the world? That would be just "father".

That, or they're the head of the AI Mafia.


A godfather is classically a close family friend who helps you get ahead in life by treating you like one of their own children.

In this sense the godfather of AI would be someone who is influential in the success and adoption of AI without being directly involved in the creation of the technology.


> AI Mafia

The current state of AI marketing has me stuck on the marvelous name of MafAI.


MafIA: Making a fantastically Intelligent Artifact


It's apt in Yoshua's case though, as he hasn't made any significant contribution himself. I'd argue his brother during his tenure at Google has had a much bigger impact.


This dude spent his career trying to create AI and then, after not creating AI, has now decided that AI is bad and that we need some kind of far-fetched global treaty to prevent other people from creating AI.

It's incredibly silly. Even if the risk is real, a "humanity defense organization" is not going to happen. Even with nuclear weapons, it was necessary to use them before we agreed not to. If AI is a danger it's going to be the same situation.


If you take into account that they subscribe to the "nukes will ignite the atmosphere and kill everyone on first detonation" view, in the sense of your example, then it makes a bit more sense, I guess.

But like with nukes I really doubt the first AGI will go full Skynet in two seconds. Everyone knows you don't go full Skynet.


I had lunch with Yoshua at the 2013 AGI Conference in Laval.

That experience and a few others demonstrated to me that he - along with Sutton - are who I think actually have the background, empathy, capacity, and lack of financial masters to actually lead AGI discussions.


> are who I think actually have the background, empathy, capacity and lack of financial masters to actually lead AGI discussions.

What does the double "actually" mean?


Two adverbs ('actually' and 'actually') are modifying two different verbs ('have' and 'lead').


Thanks, actually.


He's probably right.

But part of me wonders:

Shouldn't we first prioritize other things like, you know, reducing hunger caused by lack of food, reducing ignorance due to lack of access to a good education, increasing human rights in areas that lack them, promoting stable political institutions in places that lack them, engineering solutions to prevent or delay catastrophic climate change, and so on?

There are so many unsolved problems that are urgent right now.


Yes, those unsolved problems (or ones like them) should have been or were priorities for most of human history.

Humanity has shown that, so far, it is incapable of or unwilling to solve them.

Existential AI risk has to be added to the pile and also taken seriously.

Short-term existential AI risk probably comes from a race towards superintelligent (hyperspeed) military AI weapons command systems. It is a lever that amplifies the existing types of risks.

The potential for military conflicts is accelerated by the types of problems you mention.

All of these challenges are interrelated. Integrating effective and accurate information collection, dispersal and application into government and society in general may be key to progress. Maybe deployment of AI is a crux that could be very helpful or make things much worse.

Without an integrated global political system, AI could make military conflicts even more existentially threatening. The US and China may continue to race ahead, deploying ever more powerful AI, controlling larger and larger swarms of military assets with more and more autonomy.


But what does Schmidhuber think?


He's waiting until the AI christening. When the Godfathers are all lined up, Schmidhuber will dramatically burst through the church doors, brandishing a sheaf of papers from the mid 90s.


Of course, at the Apocalyptic End-of-Times, we'll all get what we deserve: More Schmidhubering!

In all seriousness, he deserves more credit than he routinely gets.


"In all seriousness, he deserves more credit than he routinely gets."

100%, he's done so much amazing, groundbreaking work! In all fairness he should be included in the "Godfathers" list, for sure.



OpenAI and the rest of the companies doing a lot of their activity and business out in the open should not be the primary worry when it comes to AI.

Rather, think about what the NSA, CIA, Pentagon, and similar organizations are doing in private that we will never hear about (unless a second Snowden comes along).

Whatever ethical guidelines, assurances, and limits are imposed on consumer / business-related AIs are good, but they won't impact the intelligence and military systems.


> D’Agostino: But how could a machine change the climate?

^ Bengio's reply to this, loosely summarized, is: "the internet / grid could go down" and "AI could manipulate people"

I am more worried about hunter drones and drones that self-manufacture. I am less worried about AI manipulation, and the telecommunications / electronic infrastructure going down seems like something that doesn't require AI and is only blocked now by other factors.


Seriously, reading this article and thinking about where we're headed really puts every moment into perspective and makes me realise it's precious.

Every time I look at children, and at the wars going on, it's a reminder that it could all just be wiped away so quickly.

Enjoy while it lasts…

Maybe this revolution will end well but man it seems like it’s going to be a wild ride.


The lead individual there also calls for a collective response in "International Institutions for Advanced AI" [0].

[0] arXiv:2307.04699v2, July 2023


Why can't they just call these people the 'Father of AI'?


Because his name isn't John McCarthy.


Because his name isn't Geoffrey Hinton


I am frustrated

We do face an existential threat, but this is not it

We face an existential threat from capitalists and greed heads that want to burn the world for profit.

This is a real looming catastrophe that is entirely avoidable

Except for greed

The threat from AI is entirely hypothetical


I think the key difference is whether people mean it literally or metaphorically if they say that something is an "existential threat".

Inequality and oppression and war and climate change and surveillance states and slavery and all kinds of other things may result in a dystopian future, but all those are obviously not literally existential threats, as humanity would keep existing, only in fewer numbers and much worse conditions.

On the other hand, there are all kinds of events that are much less likely than those former ones, but are literally existential threats for humanity - like major asteroid strikes or the worst-case scenarios of stronger-than-human AI, or perhaps some pandemics or nuclear war might cross that threshold iff they can wipe out all of humanity, not "just" most of it.


Well, I’m not sure those threats are unrelated.

I had an interesting conversation with … ChatGPT. You know what? ChatGPT is prompted to convince you that it is inoffensive, thanks to OpenAI’s safeguards, precautions, and research on ethical AI.

But ask it what would happen if a malicious organization took control of OpenAI. Ask it whether that would represent a threat to humanity and, interestingly, it will answer that yes, that would be a big threat to humanity.

GPT is convinced / prompted that OpenAI are the good guys.

Interestingly, I don’t share this view.


We did this before. It was called "Lifeboat"; they took a bunch of money from Jeffrey Epstein, and people started pretending they hadn't been affiliated or whatever once they started throwing sex traffickers into cells.

I am tired of people creating precarity then showing up to solve it after years of running roughshod over this planet.

"AI" is often papering over bad datasets -- you don't have the data to do so much as a student's t test, but you cater to people who only listen when they hear a "yes" after many many times being told how stuff works.

[0] "The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity. Lifeboat Foundation is pursuing a variety of options, including helping to accelerate the development of technologies to defend humanity, including new methods to combat viruses (such as RNA interference and new vaccine methods), effective nanotechnological defensive strategies, and even self-sustaining space colonies in case the other defensive strategies fail. We believe that, in some situations, it might be feasible to relinquish technological capacity in the public interest (for example, we are against the U.S. government posting the recipe for the 1918 flu virus on the internet). We have some of the best minds on the planet working on programs to enable our survival. We invite you to join our cause! The Lifeboat Foundation now has the world’s first bitcoin endowment fund"

https://web.archive.org/web/20140228233556/http://lifeboat.c...

[1] "Jeffrey Epstein is a financier and science philanthropist. He is the Chairman and CEO of the Financial Trust Company and the founder of the Jeffrey Epstein VI Foundation. Jeffrey is a former member of the Trilateral Commission, the Council on Foreign Relations, the New York Academy of Science, and a former Rockefeller University board member. He is actively involved in the Santa Fe Institute, the Theoretical Biology Initiative at the Institute for Advanced Study, the Quantum Gravity Program at the University of Pennsylvania, and once sat on the Mind, Brain, and Behavior Advisory Committee at Harvard University. He is also a member of the Edge Foundation. Jeffrey established the Jeffrey Epstein VI Foundation in 2000 to support cutting edge science and medical research around the world. In 2003, the foundation established the Program for Evolutionary Dynamics at Harvard University with a $35 million gift. The Program is directed by Martin Nowak, Professor of Biology and Mathematics at Harvard. The Program studies the evolution of micro-biology through the lense of mathematics. Since its establishment, the Program has made pivotal advances in the treatment of cancer, HIV, and numerous infectious diseases. The Jeffrey Epstein VI Foundation is based out of St. Thomas in the US Virgin Islands and plays an increasingly active role in supporting youth and early education programs across the US Virgin Islands and the United States."

https://web.archive.org/web/20140228233556/https://lifeboat....


The OpenAI safety crew seems to create stuff. Everyone else in the industry, from Yud to Gebru, are just wordcels: producing voluminous amounts of text with inside jargon. A sideshow.


Well, yeah, of course if you think creating AIs is bad for the planet, then you're going to choose not to create AIs and not to help others do so.


I'm not saying they have to release AI. I'm saying that they're not producing anything except blogposts. Even slowdown policy would be something since it would stall AI development. Instead, they're the same as all the environmentalist non-profits that build awareness.


I replied to the comment I wanted to reply to, not what you actually wrote. Specifically, where you wrote, "the OpenAI safety crew," I chose to see "the OpenAI crew". Sorry about that.

(Still, it'd be better for the planet IMHO for OpenAI to get shut down, safety crew and all.)


All good. Easy misread to make.


AI does not (yet) exist. You can't father something that doesn't exist.


> “Previously thought to be *decades* or even centuries away, we now believe it could be within a few years or *decades*,” Bengio told the senators.

Good one. :-)



