John Carmack and Rich Sutton partner to accelerate development of AGI (globenewswire.com)
66 points by ChrisArchitect 8 months ago | 83 comments



More interesting is the announcement press conference and following Q&A: https://www.youtube.com/watch?v=aM7F5kuMjRA


That video URL doesn’t work anymore unfortunately

Here’s an updated link: https://youtube.com/watch?v=uTMtGT1RjlY


It ain't ever gonna happen without becoming Bitcoin: wasting tons of energy and money, but accomplishing only an inefficient approximation of biology. Just because you could potentially do something doesn't mean you should. There is no reason to do it but triumphal narcissism, or VCs wanting to burn up their own money.


>tons of money

People want to eat.


I thought Carmack was focused on VR. He changed gears after leaving Meta, it seems.


Carmack's interview with Lex Fridman was one of the things that got me excited about programming again after a deep burnout. But the part where he suddenly gets irritated at a mere mention of AI safety sent chills down my spine. I hope he has changed his attitude since. And if not, I hope he fails.


I think his irritation is aimed at rank amateurs who saw Space Odyssey once and now project all kinds of irrational fears, ripped straight from popular science fiction, onto this space. People predicting doom scenarios with just about any technical revolution is nothing new. Luddites and fear mongering are about as old as technology itself.

There are a lot of people out there with an opinion on AI right now. Carmack at least has done his homework, and he's identified some solvable technical problems. And he's solving those. That's how technology moves forward. He's a bit on the spectrum, so yes, he doesn't have a lot of patience for people spouting nonsense in a hysterical, hand-wavy way.

Part of AI safety is the likes of China and a few other countries not getting there first and using it against us. They aren't going to slow down because of some activists with ethical concerns in the Bay Area. Unlike some of the irrational fears, that one is very real, because they already have a lot of people working on this and they own a lot of chip factories.

So our choices are to wait for history to happen or make it happen.


The Luddites were replaced by the machines, which is why they were protesting.

We (humans) will not use A.I. to compete against each other. A.I. will be a competing top species.


AGI will be a tool of the owning classes to extract as much productivity and profit from the working class as possible.

In addition to your concerns.


Yes, this is the intermediate 5-15 years.


I agree with the latter part of your argument and wouldn't have posted my comment if that was what he said. (Although that doesn't mean we shouldn't be thinking about the risks of AGI; rather, we should think both about how to mitigate them and about how to get there before China.)

The thing is, however, that regardless of how much nonsense might be out there, well thought-out arguments for AI risks do exist. Dismissing them together with the nonsense has to be either a conscious choice or utter ignorance, either of which would be very concerning for someone aspiring to develop such a high-impact technology (and quite possibly capable of doing it).


That said, how do Americans imagine a perfect AGI? How should it think? What should it believe? Should it be optimized for the short term? There was a scene in RoboCop where he had a few hundred rules installed to obey and began to glitch.


AI safety is synonymous with user hostile design. I think John Carmack genuinely cares about users.


> I think John Carmack genuinely cares about users.

What makes you think that?

He worked at one of the most user-hostile companies on the planet for a decade, and only left because he wasn't happy with its operating efficiency.

Someone with that mentality might make technological progress on their own, but I'm not confident that they value morals and human safety over technology. His dismissive comments about AGI safety are certainly a concern.


I'm guessing that GP is referring to how, in Lex's interview, he seemed to ground his decisions in how they deliver value to the user. IIRC this came up more when they talked about his time at Meta.


I am as annoyed as anyone at the dumbing down of generative AI in the name of preventing offense at all costs. But what's at stake with AGI is not somebody's feelings - it's the existence of the human race.


I think this whole narrative is completely and utterly overblown. There are so many things that endanger the existence of human beings that are far more tangible, like climate change, or dependence on non-renewable energy sources, water scarcity, loss of habitat and biodiversity. Where is the evidence that AGI is an existential threat? Why does science fiction of old play such a controlling role in the narrative behind AI?


Where is the evidence it isn't?

Imagine something that thinks 1000x faster than you, is thousands of times smarter than you, never sleeps, never eats, can self-improve, can self-replicate, and that doesn't like you, or that someone deployed against you and your ethnicity, family, whatever?

It would be devastating. You'd be fucked. We'd all be fucked.

Personally I don't think such a creation will ever be practical or work the way we think it will, but if we'd like to keep enjoying the world and value our autonomy, I think we should be very cautious when implementing it.


>Imagine something that thinks 1000x faster than you, is thousands of times smarter than you, never sleeps, never eats, can self-improve, can self-replicate, and that doesn't like you, or that someone deployed against you and your ethnicity, family, whatever?

Yeah, that would be bad. We're also way, way off from that point. A real threat that we actually face now is someone like Sam Altman getting Congress to make it illegal for individuals to run models locally, because if we run them locally he can't charge us subscriptions for them or harvest information about our lives, thoughts, and feelings. Imagine an AI therapist (I've already done this with llama 65B a bit) with telemetry that lets commercial interests know all your specific deepest desires, insecurities, and fears so they can sell you stuff.

Skynet isn't the pressing issue, a Corpo-Techno-Stasi is.


The burden of proof, I would say, lies with the AI doomsayers. Great claims require great evidence. The above is wild speculation, not evidence.


You can't deploy it if you don't control it.


> it's the existence of human race

We will need AGI to ensure the continued existence of the human race. Without it, our destruction is assured.


Why is safety bad? That's a weird claim...


Just because something is done in the name of "safety" doesn't make it unimpeachably good. In fact, it's often quite the opposite. Ask anyone who's had to get on a plane at a US airport in the last 20 years.


Do you mean the seatbelts in the car I drove to the airport in? Or the elevator inspections on the elevator I took from the 4th floor of the parking garage? Maybe the lack of smoking in the airport, or on the airplane? The regular aircraft inspections? Or the regulations on pilot hours? Laws on alcohol consumption for them? How about the emergency exits? Or the oxygen masks? Are these examples, which I came up with off the top of my head, quite the opposite of unimpeachably good?


No, they’re a bad-faith reply, since surely you knew they were referring to the airport security checkpoint and some of the asinine luggage restrictions imposed there.


Oxygen masks definitely aren't unimpeachably good. Passenger oxygen mask systems are virtually never needed but have caused multiple fires that destroyed planes and killed people. They are an expensive complication without which plane travel would be better - the plane would be cheaper to build, there'd be more room and weight available for passengers or luggage, and we'd all save time on safety briefings from not having to hear instructions on how to do a thing that you certainly won't need to do.

The pilots need oxygen masks as a backup while they quickly descend to a safer altitude. Unless you're flying over the Himalayas and can't drop that low, the passengers should be fine.

https://en.wikipedia.org/wiki/Emergency_oxygen_system#Incide...


It would seem they have protected against brain damage though?


You seem to have misread the parent comment as saying that anything done in the name of safety is bad. They did not say that.


In the context of AI, the more safety measures are put into a model, the worse it performs, or so it is claimed.

Safety here being, say, a user asking for instructions on how to make a dirty bomb and the model responding with "Sorry, I can't do that ethically".


AI ethics (like making current AI refuse to do some things) and dealing with existential risks to humanity from future AI are quite different, so we should probably not put them into the same category when talking about this.


Exactly: one is about shielding humans from their own stupid intents, while the other is about shielding ourselves from AI's homicidal/genocidal intents (even if only as a second-order effect).


This is a common misconception of the meaning of model performance. AI safety effectively means adjusting the objective function to penalize some undesirable outcomes. Since the objective function is no longer absolute task performance, model performance doesn't go down - it is simply being evaluated differently. The user may be unhappy - they can't build their dirty bomb - but the model creator isn't using user happiness as the only consideration. They are trying to maximise user happiness without straying outside whatever safety bounds they have set up.

In that sense it is mathematically equivalent to (say) applying an L2 regularization penalty to reduce the occurrence of higher-order terms when fitting a polynomial. Strictly it will produce a worse fit on your training data; however, it is done because out-of-sample performance is important.
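To make the analogy concrete, here is a minimal sketch in Python (toy data and a made-up penalty weight lam, purely for illustration - this is not anyone's actual safety objective): fitting a degree-9 polynomial with and without an L2 penalty. The penalized fit is strictly worse on the training data, yet better under the objective it is actually evaluated against.

    import numpy as np

    # Toy data: noisy samples from a smooth function.
    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 20)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(20)

    # Design matrix for a degree-9 polynomial.
    X = np.vander(x, 10)

    # Ordinary least squares: minimizes training error and nothing else.
    w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # Ridge regression: minimizes training error + lam * ||w||^2,
    # trading some training-set fit for smaller (tamer) coefficients.
    lam = 1e-2
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

    # Ridge is guaranteed to have >= OLS training error...
    print(np.sum((X @ w_ols - y) ** 2), np.sum((X @ w_ridge - y) ** 2))
    # ...but under the penalized objective the ridge solution wins by
    # construction - "performance" depends on which objective you measure.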


Is it just safety? We also need to align them to be useful, so I'm not sure safety and usability are mutually exclusive. A safe model seems like a useful model; a model that gives you dangerous information seems, well, dangerous, and less useful.


Does Google or other search engines block sites that have instructions on how to make a dirty bomb?


I googled that exact phrase (and put on the kettle for the visit I'll soon get). The first page was all government-related resources on how to deal with terrorist attacks.

If not outright blocked, instructions do seem to be weighted down.


That's not what people mean by AI safety - they're referring to the dangers of uncontrollable or runaway AI. Particularly AGI.


It's a motte-and-bailey situation [1]. In theory, AI safety is about X-risks. In practice, it's about making AI compliant, non-racist, non-aggressive, etc.

[1] https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy


Scott Alexander has a nice post about the intersection of these two types of AI safety: https://www.astralcodexten.com/p/perhaps-it-is-a-bad-thing-t...


The standard e/acc trope is that existential risk is obviously sci-fi nonsense, and so anything that slows down progress is just doing harm. (Usually no attempt is made to engage with the actual arguments for potential risk.)

Given the self-evidence of no existential risk, there is then an objection to “dumbing down” where model performance suffers due to RLHF (the “alignment tax” is a real thing), and often but not always this includes an objection to wokeness or perceived unwillingness to speak “truths” or left-wing bias being imposed on models.


Which, if you think carefully enough, means that he's neither interested in nor capable of creating AGI.

He, like many engineers, thinks intelligence is a kind of engineering: you just need to build something that works.

Intelligence is all about coping under failure. It is a lot more like 'acting safely when you don't have the answers you need' than 'answering questions correctly'.

That latter impulse resigns the whole field to cheap mimicry. These guys aren't even starting from the right point, nor could they, being engineers.


> Intelligence is all about coping under failure. It is a lot more like 'acting safely when you don't have the answers you need' than 'answering questions correctly'.

This is the first time I see this definition of intelligence.

My first impression is that this is a relevant aspect of intelligence, but very far from "all about". (other aspects being things like "abstract thought", world knowledge, agency, ...)

> That latter impulse resigns the whole field to cheap mimicry

AIs are inherently non-human intelligences. (I like the shoggoth comparison [1].) AI companies go to great lengths to make them behave more human-like. If you look at earlier AIs (e.g. when they play games against humans), you will find that human experts (e.g. the people they play against) often note that the AI behaves in very unhuman, hard-to-understand ways in certain respects (e.g. it makes a move that seems like utter nonsense at first but turns out to be very important 40 moves later in the game). (I should note that the opposite is true as well: sometimes the AI surprises the expert by exhibiting very human-like behavior.)

[1] https://www.astralcodexten.com/p/janus-simulators


I wouldn't count AI as intelligent. I have a definition grounded in empirical observations of actually intelligent systems -- start there, and generalise very carefully.

The reason intelligence has evolved in animals is because we do not have the answers. All of modern AI is focused on how to extract behaviour from answers -- rather than how to cope when they aren't present.

Intelligence is the solution to the problem that evolution cannot adapt life fast enough. Evolution is the kind of 'intelligence' AI has: delete the failures; know the answers.

The intelligence of interest, in animals, is live adaption to one's environment. It's ecological rationality brought about by sensory-motor adaption.

This is why we find the NN-type solutions to problems deeply unsatisfying. We aren't missing something: there is no "there" there. It's just a compression of the answer space.

This is really disappointing: the intelligent solution is the interesting one! Modern AI finds ways of solving problems without intelligence.

Emphasis on the word 'artificial'.


>He, like many engineers, thinks intelligence is a kind of engineering: you just need to build something that works.

Intelligence is what intelligence is. You don't know what it is even if you dearly believe you do. Nobody really does. We construct ways to probe it, at least to probe the aspects of it we find useful.

>That latter impulse resigns the whole field to cheap mimicry.

"Mimicry" is an assertion that is essentially meaningless. You mean to tell me the plane is fake flying ? Lol Okay. The bee might just as well accuse a bird of mimicry. This is a very rubbish line of argument an astonishing number of people seem to have trapped themselves in. The idea that internal processes matter anything more than a means to an end.


> You don't know what it is even if you dearly believe you do

Right, but you know enough -- of course! -- to say that I do not know.

I've written the first draft of a book on animal intelligence. I doubt either you or Carmack has even thought that a study of actually intelligent systems (animals) should enter the conversation.

No no, of course, it's just going to be a modulo trick.


>Right, but you know enough -- of course! -- to say that I do not know.

I simply know that sincerely believing something to be so has no actual bearing on truth. I know that results are what matter because that is what we employ with each other. And because that's the only thing Evolution "cared" about.

>I've written the first draft of a book on animal intelligence.

That's good for you.

>I doubt either you or Carmack has even thought that a study of actually intelligent systems (animals) should enter the conversation.

I do know that this study is not the be-all and end-all. Why would I be concerned with following one example of biology to a T when even Nature itself does not (flight, and the bee and the bird, among others)?

>No no, of course, it's just going to be a modulo trick.

There is no trick. There is only what flies and what does not. Trick flying is not a distinction that exists. Reality does not play tricks.


>He, like many engineers, thinks intelligence is a kind of engineering: you just need to build something that works.

It is quite plainly possible to produce a system at least as intelligent as a human literally through random chance, based solely on what happens to work. That is, in fact, the sole example we have for how to make anything like AGI.


Human-level intelligence has never emerged from random chance. We humans evolved, and that's via a Darwinian process of natural selection - which is about as far from random chance as one could get.

Mutations are random - they provide the raw material for evolutionary selection. Evolution itself is absolutely not random chance.

Unless you were referring to something else?


Artificial general intelligence seemed way cooler before the current LLM wave. Now it just seems dangerous. What's the point? What's the goal here? Edit: genuinely curious about examples of applications, which are absent from the replies so far.


For a bunch of computer scientists to keep their curious minds entertained and another bunch of already rich individuals to get richer.


Whoa, so we discard all his previous achievements because you're jealous you don't have any money?


Weird jump to make from my comment. Carmack is obviously in the former group.


Alright, I read too fast, and the thread had soured my mood. Sorry!


Interested in your logic: what did you like about pre-LLM AGI? The "maximize utility function at any cost" feature? The single-minded focus on beating people in games?

It's quite terrifying how, just as we've found an apparently very easy path to baking our preferences and quirks into intelligent systems, people have become very "responsible" and concerned for the survival of the human race, parroting alarmist rhetoric that predates not only LLMs but even the early RL successes of DeepMind, and that just cites the vague shower thoughts of Bostrom and similar non-technical ilk. Say what you want about LLMs, but there's zero credible reason to perceive them as a riskier approach!


LLMs have proven to be a very, very different path from what most humans for decades assumed artificial intelligence would manifest as.

They're not these rigid, logic/rule-bound systems that struggle with human emotions. By all accounts, GPT-4 is as emotionally competent as it is at anything else.

I suppose there's something unsettling about building Super Intelligence in humanity's image.


It’s entirely rule-bound. All it does is draw tokens from a statistical distribution. What people mostly don’t like to contemplate is that they too are entirely rule-bound: Brains do nothing but follow the laws of physics, proceeding from one state to the next on the basis of these rules alone.
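For what it's worth, "drawing tokens from a statistical distribution" is mechanically very simple. A toy sketch of just the sampling step (not any real model's code; the logits here are made up):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Softmax over the model's raw scores gives a probability
        # distribution over the vocabulary; then draw one token id.
        if rng is None:
            rng = np.random.default_rng()
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()                      # for numerical stability
        p = np.exp(z) / np.exp(z).sum()
        return rng.choice(len(p), p=p)

    # Hypothetical logits over a 5-token vocabulary:
    print(sample_next_token([2.0, 1.0, 0.5, -1.0, -3.0]))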


To reduce costs long term and make more money at the expense of other people.


Since when is doing math operations on a bunch of text dangerous?


Perhaps because 'the pen is mightier than the sword'.

It's the same reason people like Jeff Bezos spent $250 million a decade ago on the Washington Post.

And Musk spent 44 billion on Twitter.

Words matter - words can change the world.


Windows is adding ChatGPT integration that allows it to run commands on your computer, things like "open application X" or "maximise this window". It still needs the user to press a button for each action, though.

Sure, it's harmless now, but imagine you let one of these LLMs actually use your browser, logged in as you, and it does something that you didn't intend it to do. We are fast getting to the point where you can ask an LLM to check your emails and answer them for you, order food from Uber Eats, or use some internal company or government system that controls a huge number of different variables.

Funnily enough, it seems human error (through bad or misguided LLM prompts) is going to become much more common as LLMs do more things for us. You would kind of expect the opposite from automated computer systems, yet here we are.


Make numbers go up. Like always...


I take it you don't have any friends or relatives who are dying of an illness; otherwise you would have found a goal for AGI all by yourself.


I don't see the connection. Can you elaborate?


In the short term, AGI means that everybody can get a personal doctor. In the longer term, AGI will help medical research.



And more discussion earlier ahead of the announcement over here:

https://news.ycombinator.com/item?id=37651548


We kind of have a problem here when you all just ignore the sole comment on a post directing you to the discussion and continue upvoting. C'mon.


You can just email the mods to have the dupes merged


Can we take a break from this already - why are a handful of computer scientists and VCs suddenly setting the course for the entire human race? Didn’t we learn anything from social media?

None of the AI features I’ve encountered being added to applications are opt-in.


Those who step up get to set the course of humanity. There is a lack of high quality political leadership and Moloch is the primary deity in the temples of capitalism.

You could argue something similar happened with cars, which continue to shape American culture and development.

I think AI tools have the chance to help make the world a much better place, but it all depends on how it's handled and implemented.

Personally, I don't trust any single group or person with the full power of AI, so I'm hoping open source models regulate any potential power imbalances.


[flagged]


There are legitimate and serious concerns about AGI as a concept, but I agree with you. Even earlier in the year, each time a new GPT model came out, people on here predicted the end of the world with increasingly manic fervour. Not done with HN, but I'm going to be avoiding AI links like the plague.


> Even earlier in the year, each time a new GPT model came out, people on here predicted the end of the world with increasingly manic fervour.

It's definitely not the majority opinion though, either by vote count or by number of comments.

Like, I get the annoyance, but... this is part of the problem, right? People are deciding that other people who worry about safety are doomers and annoying and it's not worth engaging with their object-level arguments. Discussions of AI safety get systematically shut off by saying they're just doomers, or saying they've watched too much Terminator, or saying they're neckbeard idiots who don't realize the real threat is capitalism and therefore open-source AI couldn't possibly be a safety concern...

There's a debate that needs to be had. Every time people mention the safety concerns, they get told off and dismissed, and then they're told that they're doomers and that everybody is sick of this debate already, which basically means "stop bringing these threats up".


Conceptually I can get behind that, but, as has been said, I share the belief that Carmack is smart enough to know we're probably not even on the right track to creating a true form of self-awareness that could end the world.


[flagged]


Carmack has been working on AI for a while. And I don’t think he cares about his reputation much.

Two decades ago he gave space travel a shot with Armadillo Aerospace. It failed, but that didn’t hurt his reputation.


Why? He's a person who likes hard problems and has a chance to work on them.

This seems to me in line with, say, Armadillo Aerospace. I doubt they'll build AGI, but we may still get some value out of this, like SpaceX got the "cheap fast iteration" idea from Armadillo.


> like SpaceX got the "cheap fast iteration" idea from Armadillo.

Any sources?


Why do you think he is trying to destroy it? And also, why do you think he cares about it?

As Orson Welles says: Ignorance is Freedom https://www.youtube.com/watch?v=iiHeNyY629A


Where is this comment coming from? Can anyone give some context?


There are some people out there in filter bubbles where AI has become something that is trendy to despise. Whenever the topic comes up, you’ll see a few people reflexively hate anyone who has anything to do with AI now. In the absence of any other context, I would assume that’s what this is.


It's telling that you think hating on it is what's trendy.


Could you try reading my comment again please? I didn’t say that. You can’t just randomly skip over important parts of sentences.


Shouldn't he first finish the VR headset?


Apple is going to do that.

And I think Apple is going to finish the AGI, too.



