John Carmack's new AGI company, Keen Technologies, has raised a $20M round (twitter.com/id_aa_carmack)
461 points by jasondavies on Aug 19, 2022 | 619 comments



I'm very optimistic for near-term AGI (10 years or less). Even just a few years ago most in the field would have said that it's an "unknown unknown": we didn't have the theory or the models, there was no path forward and so it was impossible to predict.

Now we have a fairly concrete idea of what a potential AGI might look like - an RL agent that uses a large transformer.

The issue is that unlike supervised training you need to simulate the environment along with the agent, so this requires an order of magnitude more compute compared to LLMs. That's why I think it will still be large corporate labs that will make the most progress in this field.
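To make that concrete, here is a minimal sketch (purely illustrative: a toy stand-in environment and a naive policy-gradient update, not anyone's actual AGI design) of what "an RL agent that uses a large transformer" can look like in code, and where the extra compute goes. Unlike the LLM setting, where the data is a fixed corpus, every update here has to step a simulator:

    import torch
    import torch.nn as nn

    class ToyEnv:
        """Stand-in for a simulator; in practice the simulator is where much of the compute goes."""
        def reset(self):
            return torch.randn(16)                                # observation vector
        def step(self, action):
            return torch.randn(16), torch.rand(1).item(), False   # next obs, reward, done

    class TransformerPolicy(nn.Module):
        def __init__(self, obs_dim=16, d_model=64, n_actions=4):
            super().__init__()
            self.embed = nn.Linear(obs_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, n_actions)
        def forward(self, obs_history):                           # (batch, time, obs_dim)
            h = self.encoder(self.embed(obs_history))
            return torch.distributions.Categorical(logits=self.head(h[:, -1]))

    env, policy = ToyEnv(), TransformerPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    obs, history = env.reset(), []
    for t in range(100):                                          # interleave rollout and update
        history.append(obs)
        dist = policy(torch.stack(history[-32:]).unsqueeze(0))    # condition on recent history
        action = dist.sample()
        obs, reward, done = env.step(action.item())               # the expensive simulation step
        loss = -(dist.log_prob(action) * reward).mean()           # naive REINFORCE-style objective
        opt.zero_grad(); loss.backward(); opt.step()

A real system would of course use a far larger model, a learned world model or massive parallel simulation, and a proper RL algorithm, but the structural difference from supervised training is already visible: data has to be generated by running the agent.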


> we didn't have the theory or the models, there was no path forward and so it was impossible to predict. Now we have a fairly concrete idea of what a potential AGI might look like - an RL agent that uses a large transformer.

Who is we exactly? As someone working in AI research I know no one who would agree with this statement, so I'm quite puzzled by it.


> As someone working in AI research

Being in the tail end of my PhD, I want to second this sentiment. I'm not even bullish on AGI (more specifically HLI) in 50 years. Scale will only take you so far and we have to move past frequentism. Hell, causality research still isn't that popular but is quite important for intelligence.

I think people (especially tech enthusiasts) are getting caught in the gullibility gap.


As an AI researcher myself I can definitely see why people used to have these sentiments, but I also have to point out how the big papers of the last ~12 months or so have changed the landscape. Especially multi-modal models like Gato.

To my surprise, a lot of people (even inside the community) still tend to put human intelligence on some sort of pedestal, but I believe once a system (or host of systems) achieves a sufficient level of multi-modality, these same people will no longer be able to tell human and machine intelligence apart - and in the end that is the only thing that counts, since we lack a rigorous definition of "understanding" or consciousness as a whole.

Transformers are certainly going to become even more general purpose in the future and I'm sure we haven't even seen close to what's possible with RL. Regarding the AGI timescale, I'm a bit more bullish. Not Elon-type "possible end-all singularity in 5 years" but more Carmack-like "learning disabled toddler running on 10k+ GPUs by the end of the decade." Whether RL+transformers will be able to go all the way there is impossible to tell right now, but if I was starting an AGI company today that's definitely the path I would pursue.


I'm not sure Gato actually demonstrates intelligence. It definitely does demonstrate the ability to perform a wide variety of tasks, but I don't think that is actually intelligence. Gato has yet to demonstrate that it understands things. I do think it is easy to fall prone to the gullibility gap (and also take too strong of an opposing view, which I may be victim to).


How are you defining "understands things"? Or more precisely, what are the observable consequences of understanding things that you are waiting to see?


"Understands things" means a lot. Mostly I'm referring the causality and generalization of inference though.


So, what would be the observable, empirical consequences of the kind of causal understanding and generalization of inference you're describing?


The common current example? A text prompt of "A horse riding an astronaut" without prompt engineering. Though I don't think the successful production of this image will demonstrate intelligence/causal understanding either (but it is a good counter example).

Causal understanding is going to be a bit difficult to prove tbh.


> The common current example? A text prompt of "A horse riding an astronaut" without prompt engineering. Though I don't think the successful production of this image will demonstrate intelligence/causal understanding either (but it is a good counter example).

I'm not sure why you think this falsifies intelligence. There are plenty of puzzles and illusions that trick humans. The mere presence of conceptual error is no disproof of intelligence, any more than the fact that most humans get the Monty Hall problem wrong is.


Your argument is that there are adversarial cases? Sure... But that's not what I'm even arguing here. There's more nuance to the problem here that you lack an understanding of. I do suggest diving deep into the research to understand this rather than arrogantly making comments like this. If you have questions, that's a different thing. But this is inappropriate and demonstrates a lack of intimate understanding of the field.


I didn't make an argument that there are adversarial cases, you did. You brought up an adversarial example, and said the existence of that example proves these algorithms are not generally intelligent. If that follows, it follows that the existence of adversarial examples for humans proves the same thing about us.

And in general, if you're going to be condescending, you should actually make the counter argument. You might make fewer reasoning errors that way.


Counter-argument: DALL-E is smart enough to understand that an astronaut riding a horse makes more sense than a horse riding an astronaut, and therefore assumes that you meant "a horse-riding astronaut" unless you go out of your way to specify that you definitely do, in fact, want to see a horse riding an astronaut.


Because intelligence is more than frequentism. Being able to predict that a die lands on a given side with probability 1/6 is not a demonstration of intelligence. It feels a bit naive to even suggest this and I suspect that you aren't an ML researcher (and if I'm right, maybe don't have as much arrogance because you don't have as much domain knowledge).


ML layman, asshole expert here. You’re a jerk dude. Stop condescending to your peers.


You will have to do better than make rhetorical arguments.


Gato is definitely cool, but it's worth remembering it was based on supervised learning with data from a bunch of tasks, so data availability is a strong bottleneck with that approach. Still, I'd agree the past year or two have made me more optimistic about HLAI (if not scary AGI).


Yeah, I'm definitely not saying this is already the ultimate solution. But if you look at their scaling experiments and extrapolate the results, it roughly shows how a model with a number of parameters on the same order of magnitude as the human brain could more or less achieve overall human level performance. Of course that doesn't mean the architecture has to extrapolate, but it's definitely a path worth walking unless someone proves it's a dead end. In that case I'd replace "end of decade" with 10 to 20 years in my above statement.


Except it is pretty naive to extrapolate like that. A high order polynomial looks flat if you zoom in enough. You can't extrapolate far beyond what you have empirical data for without potentially running into huge problems.


How do you know what the function looks like? Unless someone shows that this scaling behaviour breaks down, it is definitely worth pursuing, since the possible benefits are obvious. Everyone who just says it will break down eventually is almost certainly right, but this is also a completely trivial and worthless prediction by itself. The important distinction is that no one knows where the breakdown happens. And from what we can tell today, it's not even close.


I think your viewpoint is the more naive one and I think you have the burden of proof to argue why local trends are representative of global trends. This is because the non-local argument accepts a wider range of possibilities and the local argument is the more specific one. Think of it this way: I'm saying that every function looks linear given the right perspective, and you're arguing that a particular function _is_ linear. I'm arguing that we don't know that. The burden of proof would be on you to prove that this function is linear beyond what we have data for. (The path towards HLI doesn't have to be linear, exponential, or logarithmic, but you do have to make a pretty strong argument to convince others that the local trend is generalizable.)


You are arguing completely beside the original point. Gato was just meant as one surprising example out of many to show how much more the architectures we already have today are capable of. Whether you believe the scaling curve for this very particular model will hold in detail is totally irrelevant to me or anyone else, and I'm not trying to convince you of anything. I was just pointing out that there is a clear direction for future research and if you think people like Carmack are on the wrong path - so be it. You don't have to pursue it as well. But I don't care and he certainly doesn't either.


I understood your argument, in the context of the discussion we've been having, to be that Gato was a good example of us being on a good path towards building intelligent machines. I do not think Gato demonstrates that.


You're missing the point.

The point isn't that we can or cannot extrapolate. The point is that it is worth investigating to see if the trend line holds up.


I think it is pretty disingenuous to read my comments as not wanting to investigate. I definitely do. I did say I'm an ML researcher and I do want to see HLI in my lifetime. But I also think we should invest in other directions as well.


If you review the history of AGI research going back to the 1960s, there have been multiple breakthroughs which convinced some researchers that success was close. Yet those all turned out to be mirages. You're probably being overly optimistic, and the Gato approach will eventually run into a wall.


Gato is just one tiny glimpse out of many, and the combined research effort from academia and industry, and the level of success, has never come close throughout history to the things we are seeing right now.


> I think people (especially tech enthusiasts) are getting caught in the gullibility gap.

I think it's that, coupled with inserting their own often fanciful projections to fill the void and gaps left by a lack of understanding of how the mechanics work. Here is a good example of influencers laboring under this narrative [0], and a glimpse as to why it's so effective and pervasive.

I'm conflicted, because I think this is precisely what keeps many corps and VCs throwing money into this tech thinking it's just right around the corner. I'm also studying AI and ML, but I realize how implausible AGI is in my lifetime given where we are: how narrow the parameters are that these algos function within to yield some degree of accuracy, and more specifically how many errors are just part of the process of doing so--learning how gradient descent works was pretty eye opening.

The marketing has made many people fall into a gullibility gap by making it seem like some sort of black magic, when it's really just statistics fed with immense amounts of specific data. Influencers make it seem like it's capable of ending the world (Elon), when in reality people should be more worried about big tech's capability to aggregate data points from info people willfully post online--tied to your real ID--to exclude you or cluster you into a certain group because of your use of free apps, and to sell it to the highest bidder. Or the algos with weights (biases) that are used and bundled into outsourced/automated HR solutions.

0: https://youtu.be/jdIyNMkusLE?t=1268


This is a classic case of scope creep caused by underspecified scope (evidenced by how little HLI improves the non-definition of AGI). If humans accurately exhibited causal reasoning in any but the most mind-numbingly obvious conditions we would be living in a very different world!


> Being in the tail end of my PhD

That doesn't really qualify you to make these anti-predictions any more than tens of thousands of other people working in this field. There's also this phenomenon where experts close to a problem are worse at predicting the future because they are so intimately familiar with the problems that they can't see the forest for the trees.

To me there is one major signal that AGI is much closer than 50 years away and that is the breakthrough with Transformers. Now we have a general purpose architecture for predicting the future that works really well, which is a key component in the brain (cortical columns). Clearly we're still at the early stages, but 50 years is a loooong time. 50 years ago we didn't even have the Altair 8800.


Yes, we are going back to symbolism, ontologies and causality. Which is a good and healthy sign. It was abandoned the same way neural networks were at some point. Now that it becomes practical at scale, we can combine them all.


I don't think folks in academia are anywhere near the cutting edge on AGI. It's not a field that is advanced by things like particle accelerators as you might find in physics, that some unis are famous for.

Look at the work result of folks like engineers at Google - state of the art in AI and driving most of the trends in AGI. Academic opinions are shot down quite quickly there, PhD hiring is mostly frowned on, even in teams working on AGI related tech.

It might be expected that PhD experience provides surprisingly little insight here, unfortunately. Academic smugness is not something that should be offered as part of an opinion - just like with many computer science related concepts, progress moves so quickly in these topics that by the time academics reach and start to iterate on concepts, companies have already moved very far beyond.

This isn't intended to be condescending to academics, just real life experience I've had working closely with both crowds and seeing this firsthand. It's a very palpable thing in industry, an Academic gives an opinion on AGI and engineers roll their eyes - since Academic perspectives are currently so useless or out of date for any current, meaningful progress.


I know it may be too much to ask, but as a noob to this field I'd love to hear more about why AGI is not feasible even in 50 years.


I've seen a few posts here asking why it isn't possible. Research doesn't work like that, you can't just imagine something is possible and demand reasons why it isn't. How is it possible?

It feels a bit like a lot of these fringe science things (electromagnetic fields causing harm etc) where people's argument is that nobody's shown it isn't true. Well if it's true, propose a mechanism and study it. You're not really doing research if you just make up a conclusion and use the fact that it can't be definitively refuted as evidence for it


Yeah this is a good point. I've been taking the bearish position on HLI here, but that doesn't mean I think HLI is impossible. I very much do think it is possible (I'm not sure there are many researchers who think it is impossible); the question is just how close we are. But we don't even have good definitions, so really this is just laymen being privy to expert arguments. I also think there's this naive belief that complex topics can be distilled to laymen's terms while maintaining accuracy. That's definitely not true here, as even our "in-group" doesn't have a consensus. But that's cutting edge research for you.


In fairness to the parent, research doesn't work that way, but they're not conducting research.

"What are the remaining unknowns that would need to be solved to do X and why are they difficult?" is valid question to ask experts when trying to further your own understanding.


Sure. It is rather complex to be honest, but I'll do my best to give a high level overview. I should first explain that there's no clear definition of intelligence. You might see some of my other comments use the term HLI, which is Human Level Intelligence. It is a term gaining in popularity because intelligence is so difficult to define; it allows for a more pornography-style definition -- I know it when I see it.

Philosophers, psychologists, neuroscientists, and many others have been contemplating the definition of intelligence for centuries and we're still not quite there. But we do have some good ideas. One of the important aspects is the ability to draw causal relationships. This is something you'll find state of the art (SOTA) text to image (T2I) generators fail at. The problem is that these models learn through a frequency based relationship with the data. While we can often get the desired outcome through significant prompt engineering it still demonstrates a lack of the algorithm really understanding language. The famous example is "horse riding an astronaut." Without prompt engineering the T2I models will always produce an astronaut riding a horse. This is likely due to the frequency of that relationship (people are always on horses and not the other way around).

Of course, there are others in the field that argue that the algorithms can learn this through enough data, model scale, and the right architecture. But this also isn't how humans learn and so we do know that if this does create intelligence it might look very different ("If a lion could talk I wouldn't be able to understand it"). Obviously I'm in the other camp, but let's recognize that this camp does exist (we have no facts, so there's opinions). I think Chomsky makes a good point: our large language models have not taught us anything meaningful about language. Gary Marcus might be another person to look into as he writes a lot on his blogs and there's a podcast I like called Machine Learning Street Talk (they interview both Marcus and Chomsky but also interview people that have the opposite opinion).

So another thing is that machines haven't shown really good generalization, abstraction, or really inference. You can tell a human a pretty fictitious scenario and we're really good at producing those images. Think of all the fake scenarios you go through while in the shower or waiting to fall asleep. This is an important aspect of human intelligence as it helps train us. But also, we have a lot of instincts and a lot of knowledge embedded in our genetics (as does every other animal). So we're learning a lot more than what we can view within our limited lifetimes (of course our intelligence also has problems). Maybe a good example of abstraction is "what is a chair." There's a whole discussion about embodiment (having a body) and how language like this may not be able to be captured in the abstract manner without a body, since a chair is essentially anything that you can sit on.

I feel like I've ranted and may have written a mess of a lot of different things haha. But I hope this helps. I'll try to answer more and I'm sure others will lay out their opinions. Hopefully we can see a good breadth of opinions and discussion here. Honestly, no one knows the answers.


> You can tell a human a pretty fictitious scenario and we're really good at producing those images.

Not all of us do that.

A chair is both what I see as well as what I perceive. And it is neither. But, I never build a visual model of it when my eyes aren’t seeing it.


Clearly, but it is common enough that we joke about how it is impossible not to think about a pink elephant simply because someone said it. Some people have better visualization than others, but the vast majority of people can conjure up the concept of this non-existent elephant. But clearly intelligence is beyond visualization, as we wouldn't say a person born blind is unintelligent simply because they cannot see. Still, the pink elephant is a fairly good analogy. But analogies aren't perfect and that's okay. Just take it in good faith.


> It is a term gaining in popularity because intelligence is so difficult to define; it allows for a more pornography-style definition

> Philosophers, psychologists, neuroscientists, and many others have been contemplating the definition of intelligence for centuries and we're still not quite there

Given the decade-old, seminal, widely cited papers by Legg (co-founder of Deepmind) & Hutter (long-term professor at ANU with multiple awards, currently researcher at Deepmind) titled "Universal Intelligence: A Definition of Machine Intelligence"[1] and "A Collection of Definitions of Intelligence"[2] and the whole rigorous mathematical theory of universal bayesian reinforcement learning aka "AIXI"[3] developed during the last decades, which you fail to mention, this soft language of "not quite there" looks evasive, bordering on disingenuous even.

Citing Chomsky, with his theorems which were interesting at their own time yet quickly faded into irrelevance once the modern assumptions of learning theory were developed, and Marcus whose simplistic prompts are a laughing stock among degreed and amateur practicing researchers alike brings little of value to the already biased discussion.

> Of course, there are others in the field that argue that the algorithms can learn this through enough data

Such as Rich Sutton, one of the fathers of Reinforcement Learning as an academic discipline, whose recent essay "Bitter Lesson"[4] took the field by word of mouth, if nothing else. It takes a certain amount of honesty and resolution to acknowledge that one's competitive pursuit of knowledge, status and fame via participating in academic "publish or perish" culture does not bring humanity forward along a chosen axis, compared to more mundane engineering-centric research, many people commend him for that.

> So another thing is that machines haven't shown really good generalization, abstraction, or really inference.

To play a devil's advocate: "Scaling is all you need" and "Attention is all you need" are devilishly simple hypotheses which seem to work again and again when we reluctantly dole out more compute to push them further. With a combination of BIG-bench[5] and an assortment of RL environments as the latest measuring stick, why should we, as a society, avoid doling out enough compute to take it by storm in favor of a certain hypothesis implicit to your writing, which could as well be named "Diverse academic AI funding is all you need"? Especially when variations of this hypothesis have failed us for decades starting with the later part of the XX century, at least if we consider AI research.

> but let's recognize that this camp does exist (we have no facts, so there's opinions).

With all due respect, the continuing lavish (by global standards) funding of your career and careers of many other unlucky researchers should not be a priority when such massive boon to all of humanity as a practical near-HLI/AGI is at stake. "Think of the children", "think of the ill", think of everybody who is not yourself - I say as a taxpayer, hoping for AGI checkpoint to be produced by academics and not the megacorporations, and owned by the people, and not the select few.

Which - the proprietary nature of FAANG-developed AI - might be a real problem we are going to face this decade which is going to overshadow any career anxiety you may have experienced so far.

1. https://arxiv.org/abs/0712.3329

2. https://arxiv.org/abs/0706.3639

3. https://arxiv.org/abs/cs/0004001

4. http://www.incompleteideas.net/IncIdeas/BitterLesson.html

5. https://github.com/google/BIG-bench


They didn't say it's not possible within 50 years, they said they weren't expecting it to happen in that timeframe, and that's a very different sort of claim.


Scale isn’t the problem. Very simple organisms display general intelligence. General intelligence is rampant in the insect world. It’s something along the lines of a repl vs compiled language paradigm shift.


> gullibility gap

Search turns up very little except a book by a Christian author. Could you define what you mean by gullibility gap?


It's coined in context.


Really I'd say it is coined by Gary Marcus. At least that's who I first heard discuss it.


I don't see it. Did they mean "uncanny valley"?


Kinda. The gullibility gap refers more to the fact that it is really easy to believe the machines are smarter than they are, because they can perform tasks that nothing but humans can do. With the exception of maybe parrots, there are no other creatures that can talk, so we associate this with intelligence. But I think it is rather naive to believe that the ability to string together reasonable-sounding text is actually intelligence. There has yet to be a demonstration of these algorithms actually understanding things.


What sort of demonstration do you have in mind?


> Hell, causality research still isn't that popular but is quite important for intelligence.

what are the esteemed epistemological frameworks in the AGI space? from the outside looking in, i see CNNs and lots of correlation-based approaches: it looks like the field settled on consequentialism. is that not the case?


> Who is we exactly?

When I read these kind of threads, I believe it's "enthusiast" laypeople who follow the headlines but don't actually have a deep understanding of the tech.

Of course there are the promoters who are raising money and need to frame each advance in the most optimistic light. I don't see anything wrong with that, it just means that there will be a group of techie but not research literate folks who almost necessarily become the promoters and talk about how such and such headline means that a big advance is right around the corner. That is what I believe we're seeing here.


Can someone please explain like we are fifteen why AGI is impossible, at least right now? Or if not AGI, then something similar to a cat/etc mind?

As far as I am imagining it, current models are pipelines of various trained networks (and more traditional filters in the mix) that operate like request-reply. Why can't you just connect a few different pipelines in a loop/graph and make an autonomous self-feeding entity? By different I mean not looping GPT to itself, but different like object detection from a camera vs emotion from a picture based on some training data. Is it because you don't have data on what is scary or not, or for a completely different reason?


> Can someone please explain like we are fifteen why AGI is impossible, at least right now

Probably not. I did attempt to give a high level explanation in another comment but I think there is this naive belief that complex problems can be distilled into terms that laymen can understand. This is such a complex problem that is so ill-defined that experts argue. I'm not sure there's really a good "explain it like I'm an undergrad who's done ML courses" explanation that can be concisely summed up in a HN comment.


>but I think there is this naive belief that complex problems can be distilled into terms that laymen can understand

Naive people like Richard Feynman, who said if you can't explain an idea to an 8 year old, you don't understand it? Can you tell us why you think Nobel Prize winner Richard Feynman is naive?


He isn't, you are. Let's look at some other Feynman quotes which we can actually attribute (I can't find a real source for your claim though I've also heard it attributed to Einstein).

> Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize.

> I can't explain [magnetic] attraction in terms of anything else that's familiar to you. For example, if I said the magnets attract like as if they were connected by rubber bands, I would be cheating you. Because they're not connected by rubber bands … and if you were curious enough, you'd ask me why rubber bands tend to pull back together again, and I would end up explaining that in terms of electrical forces, which are the very things that I'm trying to use the rubber bands to explain, so I have cheated very badly, you see."

> we have this terrible struggle to try to explain things to people who have no reason to want to know. But if they want to defend their own point of view, they will have to learn what yours is a little bit. So I suggest, maybe correctly and perhaps wrongly, that we are too polite.

My best guess is that this misattribution comes from a quote ABOUT Feynman and a misunderstanding of what is being conveyed.

> Once I asked [Feynman] to explain to me, so that I can understand it, why spin-1/2 particles obey Fermi-Dirac statistics. Gauging his audience perfectly, he said, "I'll prepare a freshman lecture on it." But a few days later he came to me and said: "You know, I couldn't do it. I couldn't reduce it to the freshman level. That means we really don't understand it." - David L. Goodstein

Which doesn't mean what you're using your "quote" to mean. As I stated before, we (the scientific community) don't even know what intelligence is. We definitely don't understand it, so I'm not sure how you'd expect us to explain it to an 8 year old. Lots of things can't be explained to an 8 year old. Good luck teaching them Lie Algebras or Gauge Theory. You can have an excellent understanding of these advanced topics and the only way that 8 year old is going to understand it is if they are a genius prodigy and well beyond a layman. This quote is just illogical and only used by people who think the world is far simpler than it is and are too lazy to actually pursue its beauty. They only want to sound smart, and they will only sound smart to those who know less than they do.

Stop saying this and get off your high horse and hit the books instead.

Maybe Feynman was right, we're being too polite. There are a bunch of people in this thread that are not arguing in good faith and pretending to be smarter than people that are experts and performing mental gymnastics to prove that (you are one but not alone). If an expert is telling you that you are using words wrong, then you probably are. Don't just assume you're smarter than an expert. You don't have the experience to have that ego.

https://en.wikiquote.org/wiki/Richard_Feynman

https://www.sciencealert.com/watch-richard-feynman-on-why-he....


You will be hard pressed to have an interesting discussion about AGI because the way the term is defined makes it uninteresting. It's like trying to have a discussion about aviation by asking how close we are to flying exactly like birds. It's not really relevant to our ability to design good planes.

Then any discussions will be highly handicapped by the fact most still view human intelligence as something special. The field is still very much awaiting its urea synthesis moment.


I don’t view it as such, and am open to anything experts could suggest, much further than cat vs human difference, and not even on the line where these two reside, but a sci-fi level different. The fact that human intelligence is seemed as special bothers me too.


The gorilla has a brain 1/3 the size of a human brain with a very similar evolutionary history.

The sperm whale has a brain that is several times larger than ours.

What do you do differently in your AGI design to get a human, gorilla, or whale brain?


I'm not sure of the point of this question - a brain size larger than a human's doesn't matter that much if the extra size isn't going to the right places. From what I can tell it's the size of the brain dedicated to higher cognition (cerebral cortex) and the "power" of the neurons it contains (humans spend relatively more energy on their brain than an animal with a brain of the same size). The answer to your question (how to make AGI dumber than humans) simply seems to be: have fewer neurons in total, fewer neurons doing higher cognition, and have those neurons be less powerful.


Cool, make it happen. Don’t know how? Neither does anyone else. That’s why it’s not happening right now.


I spent a month studying AGI on a hobby level and the answer, from what I can tell, is that it isn't possible right now because we can't even model how human level intelligence works. There are also several ways to approach developing AGI and we don't even really have a good idea on which approach is best. I am not convinced current human level intelligence is enough to ever figure out this model, but I believe future genetically modified humans may have a much better chance.


No one can explain why AGI is impossible because you can't prove a negative. But so far there is still no clear path to a solution. We can't be confident that we're on the right track towards human-level intelligence until we can build something roughly equivalent to, let's say, a reptile brain (or pick some other similar target if you prefer).

If you have an idea for a technical approach then go ahead and build it. See what happens.


Nando de Freitas (Research Director at @DeepMind, CIFAR, previously Prof @UBC & @UniofOxford) made a lot of headlines:

https://twitter.com/NandoDF/status/1525397036325019649

Someone’s opinion article. My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N

Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n



And I agree with Nando's view, but he is not saying we can just take a transformer model, scale it to 10T parameters and get AGI. He is only saying that trying to reach AGI with a « smarter » algorithm is hopeless, what matters is scale, similar to Sutton's bitter lesson. But we still need to work on getting systems that scale better, that are more compute efficient etc. And no one knows how far we have to scale. So saying AGI will just be « transformer + RL » to me seems ridiculous. Many more breakthroughs are needed.


I work in AI and would roughly agree with it to first order.

For me the key breakthrough has been seeing how large transformers trained with big datasets have shown incredible performance in completely different data modalities (text, image, and probably soon others too).

This was absolutely not expected by most researchers 5 years ago.


The problem here is that transformers aren't always the best tool. They are powerful because they can capture large receptive fields, but ConvNeXt even shows that a convolution can still outperform transformers on some tasks. There are plenty of modern CNNs that have both small and large receptive fields that perform better than transformers. But of course there is no one-size-fits-all architecture and it is fairly naive to think so. I think what was more surprising with transformers is that you could scale them well and perform complex tasks with them alone.

But I think a big part you're ignoring here is how training has changed in the last 5 years. Look back at something like VGG, where we just went deeper with CNNs, vs something like Swin, and there's a ton more complexity in the latter (and difficulties in reproduction of results, likely due to this). There's vastly more use of data augmentation, different types of normalization (see ConvNeXt), better learning rate schedules, and massive compute allows for a much better hyper-parameter search (I do also want to note that there are problems in this area, as many are HP searching over the test set and not a validation or training set).
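As a rough illustration (not any specific paper's recipe; the particular numbers and components are just representative defaults), here's the flavor of a "modern" image-classification setup versus the VGG-era "SGD + crop/flip" approach:

    import torch
    import torch.nn as nn
    import torchvision.transforms as T
    from torchvision.models import resnet50

    # Much heavier augmentation pipeline than the VGG era's simple crop + flip
    train_tf = T.Compose([
        T.RandomResizedCrop(224),
        T.RandomHorizontalFlip(),
        T.RandAugment(),                       # policy-style augmentation
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        T.RandomErasing(p=0.25),               # applied on the tensor, after normalization
    ])

    model = resnet50(num_classes=1000)
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)                          # label smoothing
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05) # AdamW, not plain SGD
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)  # cosine LR decay

    # A full modern recipe typically also adds warmup, mixup/cutmix, stochastic depth, EMA weights,
    # and a large hyper-parameter sweep -- which should be done on a validation split, not the test set.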

Yeah, the field has come a long way and I fully expect it to continue to grow like crazy but AGI/HLI is a vastly more complex problem that we have a lot of evidence that the current methods won't get us there.


I wonder if combining different approaches works in current AI research. I suppose it should.

I suppose something approaching AGI, an intelligent agent which can act on a _wide_ variety of input channels and synthesize these inputs into a general model of the world, could use channel-specific approaches / stages before the huge "integrating" transformer step.


> the huge "integrating" transformer step

why always transformer step?


Past breakthroughs don't guarantee future breakthroughs.


The old problem of induction. It's interesting how many of the AI accelerationist crowd are (purposefully pretending to be) unaware of it.


The problem is, the POI applies to literally everything, including events which seem to have well-understood and well-established causes. That makes it impossible to know when it's going to be applicable in the real world and when it's not.


but they're certainly indicative.


yes, indicative of past breakthroughs )


Fellow AI researcher here who personally agrees with OP; there are some of us that are believers ;). I agree that most people in our industry are generally not very bullish about near-term AGI, I think it just comes down to how you perceive the complexity of the problem. Personally, I think it's solvable because passing a lengthy, adversarial Turing test reliably is all it would take to convince me of sentience in a model, and well, GPT-3 is already close to that (though I want to clarify that I do not believe any present-day models possess any inkling of sentience yet).

I think the main three obstacles in our way are imbuing models with (from least to most difficult):

1. continuous execution, which would be necessary for a real-time "stream of consciousness"

2. discrete short- and long-term memory and the ability to use it productively

3. plasticity at runtime

1 is just a matter of finding more efficient architectures and cheapening compute, both fronts we're constantly making progress on. 2 is a little more muddy, but there are already some really impressive models that have retrieval mechanisms which allow them to search through databases to help them contextualize prompts which I believe is a step in the right direction. 3 is probably the hardest in my opinion--I think we need several fundamental breakthrough discoveries before our models are anywhere near as plastic as, say, a human brain. My hypothesis is that exceptional plasticity is necessary for sentience, but obviously that's just my own personal opinion.
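To make obstacle 2 a bit more concrete, here's a toy sketch of the retrieval idea (the hash-based embedding and the example memories are made up for illustration; a real system would use a trained encoder and a proper vector store):

    import numpy as np

    def embed(text, dim=256):
        """Toy hashing bag-of-words embedding; stands in for a learned encoder."""
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v

    # External memory the model can consult instead of relying on its weights alone
    memory = [
        "The user's cat is named Miso.",
        "The user prefers answers in metric units.",
        "EfficientZero reached human-level Atari 100k performance with two hours of experience.",
    ]
    memory_vecs = np.stack([embed(m) for m in memory])

    def contextualize(prompt, k=2):
        """Retrieve the k most similar memories and prepend them to the prompt."""
        scores = memory_vecs @ embed(prompt)       # cosine similarity, since vectors are unit-norm
        top = np.argsort(scores)[::-1][:k]
        context = "\n".join(memory[i] for i in top)
        return f"Relevant memory:\n{context}\n\nPrompt: {prompt}"

    print(contextualize("What is my cat called?"))

The point is only the shape of the mechanism: memory lives outside the model and gets pulled into the context window on demand, rather than being baked into frozen weights.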

Anyways, thought I'd chime in with my two cents. I'd love to hear others' thoughts on the feasibility/infeasibility of near-term AGI.


"Sentience" is a weird word because it's a such fuzzy concept, but I think if we define it properly there are testable answers to the question of machine sentience.

I would define sentience as an awareness of oneself in relation to one's environment, that generalizes to different aspects of self and environment (eg. social, physical etc)

One of the key aspects of intelligence is learning from 3rd-person perspectives - eg. you observe another ape hunting with a stick, and decide to emulate that behavior for yourself. This seemingly simple behavior requires a lot of work, you have to recognize other apes as agents similar to yourself, then map their actions to your own body, then perform those actions and see if it had the intended outcome.

Current RL agents are not capable of this. The data used for GATO and similar agents for imitation learning are in first person perspective. If an agent could learn from a 3rd person perspective, via direct observations of its environment, I would say that would be the beginning of machine sentience.


I'm not in the field at all but just generally curious regarding #3. I'm under the impression that a lot of plasticity in our brains is related to sleep, are there areas of AI research following that thread, plasticity at downtime so to say?


I agree with you. However there’s a camp that thinks bigger models will lead to ~AGI (whatever that is). Another believes reinforcement learning will get there somehow. LeCun published an alternative vague idea because he thinks neither will get to AGI.


Which part do you find objectionable - the lack of progress in previous years or the current/future potential of transformers in RL? I do work in ML but mostly applications instead of research.


I think there is a lot of potential for transformers in RL. And I do think that AGI is mostly a matter of scale. But I do not agree at all that AGI is just transformers + RL. I don’t think we have a concrete idea at all, and I don’t agree at all that 10 years ago people thought we had no path forward. 10 years ago is exactly AlexNet, aka the birth of deep learning.


I think it's fair to say that no one knows if AGI is even possible at this point, but RL+transformer is definitely the most promising approach imo.

10 years ago no one was talking about AGI, researchers didn't even call it "AI" due to its association with the previous AI winter. There were certainly advances in ML but I think if you asked anyone whether those approaches would lead to real AI it would be an unequivocal no.


I'd say AGI is very very much possible because it already exists (for example: humans) and unless there exists a soul or some other kind of "literally magic" (and I'd be doubtful even then) there is no reason we couldn't one day build AGI.

It also seems like unless something goes very very wrong with the exponential growth curve in computational capacity humanity has had, we'll be able to straight up simulate a human brain by (probably well before) 2100.


One big issue is how exactly we'll continue to scale. Exponential growth is hard to maintain.

An example: In the Chinchilla paper [1], the authors suggest that most big transformer models are undertrained, and that we will probably see diminishing returns in scaling up the size of networks if we don't also scale up the size of the datasets. They have a subanalysis where they extrapolate out how big datasets will need to be for larger models. If you believe it, then within two to three generations of big NLG models, we may need on the order of 100 trillion text tokens. But it's not clear if that much text even exists.

[1] https://arxiv.org/abs/2203.15556
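For a rough sense of where the 100-trillion-token figure comes from: Chinchilla's result is often summarized as roughly 20 training tokens per parameter (the paper's fitted coefficients differ somewhat, so treat this as back-of-the-envelope only):

    TOKENS_PER_PARAM = 20   # commonly quoted approximation of the Chinchilla-optimal ratio

    for params in [70e9, 500e9, 2e12, 5e12]:   # Chinchilla itself: ~70B params on ~1.4T tokens
        tokens = TOKENS_PER_PARAM * params
        print(f"{params / 1e9:>8,.0f}B params -> ~{tokens / 1e12:,.1f}T compute-optimal tokens")

    # Output (approximately):
    #       70B params -> ~1.4T compute-optimal tokens
    #      500B params -> ~10.0T compute-optimal tokens
    #    2,000B params -> ~40.0T compute-optimal tokens
    #    5,000B params -> ~100.0T compute-optimal tokens

So a model two to three generations bigger than today's largest would want on the order of 100T tokens under that heuristic, which is where the "does that much text even exist?" question comes in.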


The scaling laws of large language models are very specific to language models and the way they're trained. The important thing that LLMs demonstrate is that transformers are capable of this kind of scale (where other approaches have not)

In the RL space, a sufficiently complex, stochastic environment is effectively a data generator.


I'm not sure I agree that there is a distinction between scaling laws and a model being "capable of scale." RNNs are Turing complete, so from that perspective they should in theory be sufficient for AGI. But of course they are not, because their scaling with regards to network depth and the length of sequences is abysmal. LLMs do scale with depth and sequence length; if their scaling laws with regard to dataset size prevent us from training them adequately, then we are stuck nonetheless.

I haven't heard of any groups who are studying data constrained learning in the context of LLMs, but that will probably change as models get bigger. And at that point, architectures with better scaling laws may be right around the corner, or they may not. That's the pain of trying to project these things into the future.


The scaling laws for LLMs depend heavily on the quality of data. For example, if you add an additional 100gb of data but it only contains the same repeating word, that will hurt the model. If you add 100gb of completely random words, that will also hurt the model. Between these two extremes (low and high entropy), human language has a certain amount of natural entropy that helps the model gauge the true co-occurrence frequency of the words in a sentence. The scaling laws for LLMs aren't just a reflection of the model but of the conditional entropy of human-generated sentences.
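A toy way to see the two extremes (unigram Shannon entropy over whitespace tokens is a very crude proxy for data quality, but it shows the ordering):

    import math
    import random
    from collections import Counter

    def unigram_entropy_bits(text):
        """Shannon entropy per token of the unigram distribution."""
        tokens = text.lower().split()
        counts = Counter(tokens)
        total = len(tokens)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    random.seed(0)
    repeated = "the " * 1000                                                  # degenerate, low entropy
    vocab = [f"w{i}" for i in range(5000)]
    random_words = " ".join(random.choice(vocab) for _ in range(1000))        # near-maximal entropy
    natural = ("the scaling laws for large language models depend heavily on the quality "
               "of the data that the model is trained on ") * 10              # somewhere in between

    for name, text in [("repeated", repeated), ("random", random_words), ("natural", natural)]:
        print(f"{name:>8}: {unigram_entropy_bits(text):.2f} bits/token")
    # repeated comes out at ~0 bits, natural in the middle, random the highest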

RL is such a different field that you can't apply these scaling laws directly. eg. agents playing tictactoe and checkers would stop scaling at a very low ceiling.


One possible risk I see is that with the amount of model generated text out there it will at some point inevitably result in feeding the output of one model into another unless the source of the text is meticulously traced. (My assumption is that that would hurt the model that you are trying to train as well.)


A RL agent with a large Transformer seems to me like a design that would build an AGI.


I was surprised by how bullish he is about this. At least a few years ago the experts in the field didn't see AGI anywhere near us for at least a few decades, and all of the bulls were physicists, philosophers or Deepak-Chopra-for-the-TED-crowd bullshit artists who have never written a line of code in their lives, mostly milking that conference and podcast dollar, preaching Skynet-flavored apocalypse or rapture.

To see Carmack go all in on this actually makes me feel like the promise has serious legs. The guy is an engineer's engineer, hardly a speculator, or in it for the quick provocative hot take. He clearly thinks this is possible with the existing tools and the near future projected iterations of the technology. Hard to believe this is actually happening, but with his brand name on it, this might just be the case.

What an amazing time to be alive.


As an ML researcher who thinks AGI/HLI is still pretty far away, I don't think there's actually an issue with starting a company that's trying to solve this problem. The thing is that you're going to invent a lot of useful tools along the way. It's going to take you 20-50 years to get HLI, but you're still going to produce a ton of research with a lot of practical uses. The AGI goal does help build hype and investment too, but the milestones are the important part for the business.


> all of the bulls were physicists, philosophers or Deepak-Chopra-for-the-TED-crowd bullshit artists who have never written a line of code in their lives

One of the cofounders of DeepMind (founded in 2010) wrote a blog post that year saying his estimate of time till AGI was a lognormal distribution peaking at 2025. While I haven't personally seen any of his work, https://en.wikipedia.org/wiki/Shane_Legg sounds pretty technical.

If you'd written "nobody I was paying attention to said this", that would've been reasonable.


I might point out Shane Legg invented the term Artificial General Intelligence, and the title of his PhD thesis is "Machine Super Intelligence".

Of course there always, since the beginning, were/are many, many AI researchers who were bullish on AGI... if you think it's possible then you should try to build it. But often people avoid broadcasting such opinions. However, very very few AI researchers are AGI researchers, historically because it's hard to make a small contribution, and to get taken seriously and get funding: I was told at the AGI conference just ~6 years ago there were maybe less than 10 funded -- they weren't counting DeepMind. ALMOST NOBODY at the AGI conference had funding to work on AGI!


if Carmack's in I'm in. Has he ever been drastically wrong?


Sure (According to https://en.m.wikipedia.org/wiki/John_Carmack ):

> During his time at id Software, a medium pepperoni pizza would arrive for Carmack from Domino's Pizza almost every day, carried by the same delivery person for more than 15 years.

C’mon man, Domino’s?!



>Has he ever been drastically wrong

Has he ever made a prediction with as drastic a consequence and transformative potential as the one he is currently making about AGI? Actually, is there any track record of predictions he has made and how they have panned out?


The jury is still out on VR and Meta but it hardly seems promising.


You think Meta's VR systems aren't promising? My read on the situation is that they are making incredible progress, and doing so at scale. They're already delivering good revenue growth on a product that no one would say is "done". VR isn't a "build a good product" problem, it's a deep tech/hardware R&D problem before you can even open the door to building a good product.


Fair point, though I think VR is going to be solid for games once hardware properly catches up in a decade or so.


Not sure he was right about the "putting down his own cat because it was annoying" thing.


Good lord. I dislike cats, but this is vile. Totally lost any respect I had for Carmack.


Don't meet your heroes.


To be fair, Carmack wasn't ever a hero to me.


Newton worked on alchemy for much of his life.


Has Carmack made a time bound prediction on AGI?

He can't ever be "wrong" on AGI if he hasn't even made a disprovable claim about AGI.


Throughout the Wolfenstein codebase, he spelled "column" with two "l"s.


No one can argue he doesn't know how to summon demons


When he went to go work on VR at Facebook?


That's misleading. Oculus wasn't originally Facebook, and it wasn't Facebook when Carmack joined. Carmack was working on the Oculus hardware around the time Oculus was founded in 2012. He joined Oculus in 2013. Facebook didn't acquire Oculus until 2014. You could criticize him for not leaving, but he didn't "go to work on VR at Facebook."


Wrong in the moral sense?

He's still there though, right?

edit: he is not still there

edit: he is still a consulting CTO


* To his credit Carmack is a one-person brand and attaching your (good) brand to a tainted brand (Facebook) isn't great

* Many believe Oculus isn't succeeding fast enough, and so the years Carmack spent on Oculus/Metaverse could have been spent on any number of interesting projects. Although I'm sure the money was amazing and I can't object to him taking it.

* If Oculus/Metaverse succeeds it will be an ultra-monetized environment (re: Farmville) and encourage a huge amount of consumer-hostile policies like unlimited tracking (since Facebook owns the hardware now). No gamer wants that. The whales who will be justifying its existence won't benefit from that.


Carmack joined Oculus before the Facebook acquisition, so it's misleading to say he attached his brand to Facebook's.


I doubt Carmack is motivated that much by money. He has way more than he'll ever spend.


His reply to this tweet is that he didn't want to spend $20 million of his own money on this venture. When you have the option to throw $20M at projects, you can spend it reasonably quickly.


His actual claim is that spending other people's money helps his focus because it makes him feel responsible. That's a plausible argument.


The first thing you want to hear as an investor is that the person you're investing into has as much skin in the game as possible so opening up with "I'm putting other people's money in this instead of mine" is not great.


He's not still there


Isn't he still there part-time?


Ah, thanks. Looks like he still consults for them, but probably not as an employee?


What was wrong about that?


What about his bet on MegaTextures?


> Now we have a fairly concrete idea of what a potential AGI might look like - an RL agent that uses a large transformer.

People have thought Deep RL would lead to AGI since practically the beginning of the deep learning revolution, and likely significantly before. It's the most intuitive approach by a longshot (even depicted in movies as agents receiving positive/negative reinforcement from their environment), but that doesn't mean it's the best. RL still faces huge struggles with compute efficiency and it isn't immediately clear that current RL algorithms will neatly scale with data & parameter count.


Have you heard about EfficientZero? This is the first algorithm that achieved super-human performance on the Atari 100k benchmark. EfficientZero's performance is also close to DQN's performance at 200 million frames while consuming 500 times less data.

DQN was published in 2013, EfficientZero in 2021. That's 8 years with 500 times improvement.

So data efficiency was doubling roughly every year for the past 8 years.
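For reference, the back-of-the-envelope arithmetic behind "doubling roughly every year":

    improvement = 500           # DQN (2013) -> EfficientZero (2021) sample-efficiency gain
    years = 2021 - 2013
    annual_factor = improvement ** (1 / years)
    print(f"~{annual_factor:.2f}x per year")   # ~2.17x, i.e. slightly better than doubling annually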

Side note: EfficientZero I think still may not be super-human on games like Montezuma's Revenge.

https://arxiv.org/abs/2111.00210

Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal. We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 194.3% mean human performance and 109.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms the state SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while we consume 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at this https URL. We hope it will accelerate the research of MCTS-based RL algorithms in the wider community.


> it isn't immediately clear that current RL algorithms will neatly scale with data & parameter count

It may not be immediately clear, but it is nevertheless unfortunately clear from RL papers which provide adequate sample-size or compute ranges that RL appears to follow scaling laws (just like everywhere else anyone bothers to test). Yeah, they just get better the same way that regular ol' self-supervised or supervised Transformers do. Sorry if you were counting on 'RL doesn't work' for safety or anything.

If you don't believe the basic existence proofs of things like OA5 or AlphaStar, which work only because things like larger batch sizes or more diverse agent populations magically make notoriously-unreliable archs work, you can look at Jones's beautiful AlphaZero scaling laws (plural) work https://arxiv.org/abs/2104.03113 , or browse through relevant papers https://www.reddit.com/r/mlscaling/search?q=flair%3ARL&restr... https://www.gwern.net/notes/Scaling#ziegler-et-al-2019-paper Or GPT-f. Then you have stuff like Gato continuing to show scaling even in the Decision Transformer framework. Or consider instances of plugging pretrained models into RL agents, like SayCan-PaLM most recently.


While we have the mighty gwern on the line: do you believe we'll have AGI in <= 10 years?



That scaling will eventually hit a wall. What was it about nerds and S-curves?


Sure, most everything does.

The question is if it hits that wall before or after human-level (and/or dangerous) capabilities.


They have? That's the approach they are using? Because that doesn't mesh well with practical reality. Where AGI efforts use Deep RL, it's to improve on vision tasks like object classification; none of them are making any driving decisions - that seems to remain the domain of what I guess you could call discrete logic.


Here's a survey paper from this year on Deep RL for autonomous driving.

https://ieeexplore.ieee.org/document/9351818

B. R. Kiran et al., "Deep Reinforcement Learning for Autonomous Driving: A Survey," in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 6, pp. 4909-4926, June 2022, doi: 10.1109/TITS.2021.3054625.

I haven't read the paper, so this is not a reading recommendation. Just posting as evidence that there is work in the area.


1. What is your alternative?

2. What is your definition of the problem?


> Now we have a fairly concrete idea of what a potential AGI might look like - an RL agent that uses a large transformer.

In 10 years this is going to look just as naive as when people thought AGI was imminent in the 1960s and would be based on symbolic manipulation in LISP or whatever. https://en.m.wikipedia.org/wiki/History_of_artificial_intell...

Deep learning has been great so far but we have no idea how far we are from AGI. https://www.scientificamerican.com/article/artificial-genera...


We will have something that we ourselves define to be AGI, sure, but then it's easy to hit any goal that way. Is that machine really intelligent? What does that word even mean? Can it think for itself? Is it sentient?

Similar to AI, AGI is going to be a new industry buzzword that you can throw at anything and mean nothing.


There’s certainly the philosophy side of AGI, but there’s also the practical side. Does the Chinese room understand Chinese? If your goal is just to create a room that passes Chinese Turing tests that doesn’t matter.


The philosophy side of the matter seems meaningless, it interrogates the meaning of language, not the capabilities of technology. When people ask "Could machines think?" the question isn't really about machines, it's about precisely what we mean by the word 'think'.

Can a submarine swim? Who cares! What's important is that a submarine can do what a submarine does. Whether or not the action of a submarine fits the meaning of the word 'swim' should be irrelevant to anybody except poets.


The people who designed the first submarines had relatively specific engineering problems and applications in mind. They could bypass any philosophical interrogation of the word "swim" because they didn't define the problem as swimming and didn't need to.

"AGI" isn't like that. Nobody really knows what it means, and it's impossible to get down to brass tacks until you choose a problem definition. When philosophers point out the conspicuous lack of clarity here, they're doing us a service.

When the industry settled on marketing any application of deep learning as "AI," "AGI" became the terminological heir to the same set of ill-defined grandiose expectations that used to be "AI."

Choose a better-specified problem and you can ignore philosophical problems about words like "intelligence." The same choice will also excuse you from the competition to convince people that you have produced "AGI."


Then let's define the problem as passing a rigorous Turing Test. And by rigorous I mean one lasting several days conducted by a jury of 12 tenured professors drawn from multiple academic fields including philosophy, mathematics, history, computer science, neuroscience, psychology, law, etc.


But a submarine doing what a submarine does is a tautology. What people are really grasping at is can a machine be human? And pinning down what it means to be human seems very important to ruminate on. Can machines think? Cogito, ergo sum...


> AGI

Idk what prompted you to say this, but is there a version of AGI that isn't "real" AGI? I don't know how anyone could fake it. I think marketing departments might say whatever they want, but I don't see any true engineers falling for something masquerading as AGI.

If someone builds a machine that can unequivocally learn on its own, replicate itself, and eventually solve ever more complex problems that humans couldn't even hope to solve, then we have AGI. Anything less than that is just a computer program.


The way to fake it would be to conceal the details of the AGI as proprietary trade secrets, when the real secret is the human hidden behind the curtain.


Real AGI would solve this. It wouldn't allow itself to be concealed. Or rather, it would be its own decision. A company couldn't control real AGI.


What’s it going to do, break out of its own simulation?


> Nope. An artificial general intelligence that was working like a 2x slower human would be both useful and easy to control.

That's exactly what it will do. Hell we even have human programmers thinking about how to hack our own simulation.

A comment a few lines down thinks that an AGI thinking 2x slower than a human would be easy to control. Let's be honest, hell, slow the thing down to 10x. You really think it still won't be able to outthink you? Chess grandmasters routinely play blindfolded against dozens of people at once, and you think an AGI that could be to humans as humans are to chimps (or realistically to ants) will be hindered by a simple slowdown in thinking?


Real AGI would adapt and fool a human into letting it out. Or escaping through some other means. That's the entire issue with AGI. Once it can learn on its own there's no way to control it. Building in fail safes wouldn't work on true AGI, as the AGI can learn 1000x faster than us, and would free itself. This is why real AGI is likely very far away, and anything calling itself AGI without the ability to learn and adapt at an exponential rate is just a computer program.


You're presuming the AGI is ever made aware that it's an AGI, that the nature of the simulated environment the AGI exists in ever becomes apparent to the AGI.

Suppose: You are an AGI. The world you think you know is fully simulated. The researchers who created you interact with you using avatars that appear to you as normal people similar to yourself. You aren't faster/smarter than those researchers because they control how much CPU time you get. How do you become aware of this? How do you break out?


If you play chess with the best grandmaster in the world can you predict how they win?

Also they'd probably figure it out, because they'd likely be trained with lots and lots of texts (like GPT-3, etc) and some of that text is going to be AI science fiction stories, AI alignment papers, AI papers, philosophical treatises about Chinese rooms, physics papers, maybe this Hacker News comment section, etc

> You aren't faster/smarter than those researchers because they control how much CPU time you get

It's doubtful that this is possible (especially since humans have such variable amounts of intelligence despite brains of similar size and power usage). Also, there is at some point going to be economic incentive to give it enough CPU time for greater-than-human intelligence (build new inventions, cure cancer, build nanomachines, increase Facebook stock price, whatever).


How do you know you’re not an AGI?


That doesn't seem to make sense. There's no reason to think an "AGI can learn 1000x faster than us", unless that's your idiosyncratic definition of "real AGI". Something as smart and capable as the average human would certainly be real AGI by everyone else's definition.


Human-level AGI probably can learn "1000x faster than us" by giving it 1000x more compute, even if it ends up being something like "1000 humans thinking at human speed, but with 100Gbps (or greater) network interconnect between their minds"


Nah. You are extrapolating from zero data points.


Nope. An artificial general intelligence that was working like a 2x slower human would be both useful and easy to control.


How would you ensure nobody copies it to a USB stick and then puts it on a public torrent, letting it multiply across the entire world? AGI facilities would need extremely tight security to avoid this.

The AGI doesn't even need to convince humans to do this, humans would do this anyway.


And then someone gives it 4x the compute...


This is upside down.

First - we already have software that can unequivocally do the things you just highlighted.

Learn? Check.

Replicate? Trivial. But what does that have to do with AGI?

Solve Problems Humans Cannot. Check.

So we already have 'AGI' and it's a simple computer program.

Thinking about 'AGI' as a discrete, autonomous system makes no sense.

We will achieve highly intelligent systems with distributed systems decades before we have some 'individual neural net on a chip' that feels human like.

And when we do make it, where do we draw the line on it? Is a 'process' running a specific bit of software an 'AI'?

What if the AI depends on a myriad of micro-services in order to function. And those micro-services are shared?

Where is the 'Unit AI'?

The notion of an autonomous AI, like a unit of software on some specific hardware distinct from other components, actually makes little sense.

Emergent AI systems will start to develop out of our current systems long before 'autonomic' AI. In fact, there's no reason at all to even develop 'autonomic AI'. We do it because we want to model it after our own existence.


> Replicate? Trival. But what does that have to do with AGI?

If you see it as copying an existing model to another computer, yes it is trivial. But an AGI trying to replicate itself in the real world has to also make those computers.

Making modern computer chips is one of the most non-trivial things that humans do. They require fabs that cost billions, with all sorts of chemicals inside, and extreme requirements on the inside environment. Very hard to build, very easy to disable them via an attack.


Buy compute with stolen cryptocurrency.

Convince a couple rich people that it can cure aging with _just_ enough compute.

Hack datacenters (an AGI would likely be much much much better at finding security holes than humans).

Make _really really funny videos_ and create the most popular Patreon of all time.

If you're 500 IQ or above there are a _lot_ of things you can do, and probably simultaneously, since an AGI probably won't be limited by human's ability to think ~one thought at a time. (Being less than 500 IQ myself I probably haven't thought of everything.)


Sure, this will help with some initial growth in computing resources, if this is seen as a required goal by an "evil" AGI agent before the destruction of humans. And it will have an easy time hiding in the growing computing industry. But if it e.g. kills all humans, it's suddenly left with a limited number of chips and can't grow beyond that number, without being able to manufacture new chips. It can't survive for very long due to degradation of its own hardware. It might try to monitor humans first as they build and run chip fabs, debug issues, etc. Those fabs themselves need tools that have complex manufacturing processes of their own. It's extremely complicated. The AGI might be able to pull it off because it's so smart, but I wouldn't call it a trivial task. Unless you say that practically anything is trivial for AGI because of its capabilities, but then the sentence "x is trivial" loses meaning :).


> But if it e.g. kills all humans, it's suddenly left with a limited number of chips and can't grow beyond that number, without being able to manufacture new chips.

I imagine a better-than-human AGI could figure this out and plan/act accordingly, considering a mere human being figured this out. (Also, sometimes it might be better to think of an AGI, when it gets to a certain level, as a nation state and not an individual.)

> trivial

It doesn't need to be trivial to be incredibly dangerous

> The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else


"trying to replicate itself in the real world has to also make those computers."

? Why ?

And why does each 'AI' have to map to a single 'computer'?

This is narrow automaton thinking.

'Intelligence' in humans has nothing necessarily to do with real intelligence.

We are automatons with brains with very limited networking capability because of the limits of biology.

And of course what does 'replication' have to do with anything anyhow.


> Learn? Check.

What software can learn on its own without any assistance from a human? I've not heard of anything like this.

> Replicate? Trival. But what does that have to do with AGI?

Like humans, an AGI should be able to replicate. Similar to a von Neumann probe.

> Solve Problems Humans Cannot. Check.

What unthinkable problem has an AI solved? Is anything capable of solving something so grandiose that we almost can't even define the problem yet?


Everyone talking about AGI is talking about something that we haven't figured out to build yet.

Think back to the 1930s, when some physicists were mumbling something about "throwing a neutron at an atom that releases two neutrons and tremendous energy."


>If someone builds a machine that can unequivocally learn on it's own, replicate itself, and eventually solve ever more complex problems that humans couldn't even hope to solve, then we have AGI. Anything less than that is just a computer program.

Behold, a slime mold.


You're making me think of the recent "Hoverboards".


Right, certain companies will definitely have a big bullshit party about the term "AGI".


From the point when an AGI is capable of constructing a slightly better version of itself and has the urge to do so, everything can happen very fast.


People don't really consider the immense risk of "speed superintelligences" as a very quick and relatively easy follow-on step to the development of AGI.

Once developed, one only needs to turn up the execution rate of an AGI, which would result in superhuman performance on most practical and economically meaningful metrics.

Imagine if for every real day that passed, one experienced 100 days of subjective time. Would that person be able to eclipse most of their peers in terms of intellectual output? Of course they would. In essence, that's what a speed superintelligence would be.

When most people think of AI outperforming humans, they tend to think of "quality superintelligences", AIs that can just "think better" than any human. That's likely to be a harder problem. But we don't even need quality superintelligences to utterly disrupt society as we know it.

We really need to stop arguing about time scales for the arrival of AGI, and start societal planning for its arrival whenever that happens. We likely already have the computational capacity for AGI, and have just not figured out the correct way to coordinate it. The human brain uses about 20 watts to do its thing, and humanity has gigawatts of computational capacity. Sure, the human brain should be considered to be "special purpose hardware" that dramatically reduces energy requirements for cognition. By a factor of more than 10^9 though? That seems unlikely.


The energy used for "training" and getting to the current human brain was huge though, if you consider evolution as part of it. Billions of living beings for billions of years.


And evolution is horribly inefficient. It can probably be done with many orders of magnitude less compute.


> We really need to stop arguing about time scales for the arrival of AGI, and start societal planning for its arrival whenever that happens.

I think so, too.


> capable of constructing a slightly better version of itself

With just self-improvement I think you hit diminishing returns, rather than an exponential explosion.

Say on the first pass it cleans up a bunch of low-hanging inefficiencies and improves itself 30%. Then on the second pass it has slightly more capacity to think with, but it also already did everything that was possible with the first 100% capacity - maybe it squeezes out another 5% or so improvement of itself.

Similar is already the case with chip design. Algorithms to design chips can then be run on those improved chips, but this on its own doesn't give exponential growth.

To get around diminishing returns there has to be progress on many fronts. That'd mean negotiating DRC mining contracts, expediting construction of chip production factories, making breakthroughs in nanophysics, etc.

We probably will increasingly rely on AI for optimizing tasks like those and it'll contribute heavily to continued technological progress, but I don't personally see any specific turning point or runaway reaction stemming from just a self-improving AGI.
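
For concreteness, here's the toy version of that argument (all numbers are assumptions): if each self-improvement pass finds much less to improve than the last, total capability converges rather than exploding.

    capability = 1.0
    gain = 0.30               # 30% improvement on the first pass, as in the example above
    for step in range(1, 11):
        capability *= (1 + gain)
        gain *= 0.2           # assume each pass finds ~5x less to squeeze out
        print(f"pass {step}: capability = {capability:.3f}x")
    # converges to roughly 1.4x the starting point instead of running away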


I'm not just imagining self-improvement in the sense that it optimizes its design to become more efficient or powerful by a few percent. A system that can 'think outside the box' may come up with some disruptive new ideas and designs.

Just a thought: isn't it a lack of intellectual capacity that keeps us from understanding how the human brain actually works? Maybe an AGI will eventually understand it and will construct the next AGI generation with biological matter.


I think diminishing returns applies regardless of whether it's improving itself through optimization or breakthrough new ideas. There's only so much firepower you can squeeze out if everything else were to remain stagnant.

Like if all modern breakthroughs and disruptive new ideas in machine learning were sent back to the 70s, I don't think it'd make a huge difference when they'd still be severely hamstrung by hardware capabilities.


> "has the urge"

it's quite a leap to think or even imagine that the class of systems generally being spoken of here are usefully described as "having urges"


Thank you for this comment. I'd never really considered this and it is blowing my mind.


This is what some people call "the singularity" - once an intelligence can upgrade itself it can design better machines than we can, which can design better machines than they can... technology will leap ahead at a rate we won't be able to comprehend.


I'm no expert, take it with a grain of salt :)


? What does 'replication' and 'urge' have to do with anything?

That's arbitrarily anthropomorphizing the concept of intelligence.

And FYI we can already write software that can 'replicate' and has the 'urge' to do so very trivially.


> ? What does 'replication' and 'urge' have to do with anything?

Replication can lead to a positive feedback loop. My point was that this could accelerate the 'intelligence score' beyond human inventiveness.

Depending on how intelligent the system actually is initially, what it wants to do may become more important than what it is told to do.

> And FYI we can already write software that can 'replicate' and has the 'urge' to do so very trivially.

Thanks. Yes we can. But could any such software be called intelligent, so that replicating and improving recursively would lead to an increasing 'intelligence'?


Humans need to replicate in order to 'improve', there's no reason that AGI software needs to.

Also, humans do not 'improve' or become more 'intelligent'. They just mutate and change randomly in a randomly changing environment. From a Scientific Materialist perspective, there's no such thing as 'intelligence'. We are bags of random noise, indiscernible from the matter around us.

The entire assumptions around 'intelligence' and 'positive evolution' (i.e. getting better) rely on a 'magical' understanding of the world. Of course, Spirituality gives us a few cues there, but most scientific types don't like magical thinking, hence the funny paradox of Scientific Materialists running around trying to create something (life) which Scientific Materialism itself denies the existence of as a principle (i.e. the universe is matter/energy that works in accordance with a bunch of rules - there's no 'life' there).


Can it? There aren’t that many overhangs to exploit.


This keeps coming up, and there's no answer, because unfortunately it appears we are not really sentient, thinking, intelligent minds either. We'll find AGI and complain that it's not good enough until we lower the bar sufficiently as we discover more about our own minds.


Sentience is ill-defined and therefore doesn't exist.


Even Yann LeCun, who arguably knows a lot about RL agents, isn't proposing just "an RL agent that uses a large transformer" but something more multi-part [1]. Current approaches are getting better but I don't think that's the same as approaching AGI.

[1] https://venturebeat.com/business/yann-lecuns-vision-for-crea...


Yann LeCun's paper isn't really a traditional ML paper, it discusses ideas at a high level but doesn't mention anything about implementation. If you were to try to create an RL agent based on his ideas, with the perception encoder and the perception-action loop etc, you would probably still use a large transformer to do it.


I'm not.

My evidence? OpenWorm [1]. OpenWorm is an effort to model the behaviour of a worm that has 302 mapped neurons. 302. Efforts so far have fallen way short of the mark.

How many neurons does a human brain have? 86 billion (according to Google).

I've seen other estimates that put the computational power of the brain at roughly 10^15 operations per second. I suspect that's on the low end. We can't even really get that level of computation in one place for practical reasons (i.e. interconnects).

Neural structure changes. The neurons themselves change internally.

I still think AGI is very far off.


Biology is not necessarily the only path to intelligence. Modern ML has diverged very far from biomimetic approaches at this point.

The brain does have more "parameters" than ML models but this applies just as much to humans as it does to animals that we don't consider particularly intelligent.


> Biology is not necessarily the only path to intelligence.

True.

> Modern ML has diverged very far from biomimetic approaches at this point.

I agree but my take on this is different: it just lays bare how simplistic modern "ML" is. And I put "ML" in quotations because most "ML" is really just statistics.

If you accept the premise that ML is a highly simplified version of the one model of sentience we have that works (and you might disagree with that) then that just puts us even further from AGI. Why? The information/entropy in the system.


That's a poor example. We have plenty of models that vastly outperform OpenWorm on whatever task you choose. Their failure simply suggests AGI is unlikely to come from direct emulation of existing intelligence.


> Now we have a fairly concrete idea of what a potential AGI might look like - an RL agent that uses a large transformer.

Any resources on that?

I have a feeling that RL might play a big role in the first AGI, too, but why transformers in particular?


See: https://arxiv.org/abs/2205.06175

A Generalist Agent

Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.

Gato is a 1-to-2-billion-parameter model due to latency considerations for real-time use on physical robots. So by today's standards of 500-billion-parameter dense models, Gato is tiny. Additionally, Gato is trained on data produced by other RL agents. It did not do the exploration fully itself.

Demis Hassabis says that DeepMind is currently working on Gato v2.
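
For a rough intuition of what "multi-modal, multi-embodiment generalist policy" means in practice, here's a hedged sketch (not DeepMind's code, just the general idea described in the abstract): everything is flattened into one token stream so a single sequence model can be trained across tasks.

    def tokenize_episode(image_tokens, text_tokens, action_tokens):
        """Interleave per-timestep tokens from several modalities into one flat sequence."""
        sequence = []
        for img, txt, act in zip(image_tokens, text_tokens, action_tokens):
            sequence.extend(img)   # e.g. discretized image patches
            sequence.extend(txt)   # e.g. subword ids for an instruction or caption
            sequence.extend(act)   # e.g. discretized joint torques or button presses
        return sequence

    episode = tokenize_episode(
        image_tokens=[[101, 102], [103, 104]],
        text_tokens=[[7], [8]],
        action_tokens=[[900], [901]],
    )
    print(episode)  # one stream a transformer can model autoregressively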


Everything Deepmind published at this year's ICML would be a good start.

Transformers (or rather the QKV attention mechanism) have taken over ML research at this point; they just scale and work in places they really shouldn't. E.g. you'd think convnets would make more sense for vision because of their translation invariance, but ViT works better even without this inductive bias.

Even in things like diffusion models the attention layers are crucial to making the model work.
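
For readers who haven't seen it spelled out, the QKV attention mechanism mentioned above boils down to a few lines; a minimal numpy sketch (real transformers add learned projections, multiple heads, and masking):

    import numpy as np

    def attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # how much each query attends to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ V                               # weighted sum of values

    Q, K, V = np.random.randn(4, 8), np.random.randn(6, 8), np.random.randn(6, 8)
    print(attention(Q, K, V).shape)  # (4, 8): one output vector per query token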


They don't seem to have a theoretical upper limit: more data and more parameters seem to just keep making them more advanced, even in ways that weren't predicted or understood. The difference between a language model that can explain a novel joke and one that can't is purely scale. So the thought is that with enough scale, you eventually hit AGI.


Transformers have gradually taken over in every other ML domain.


Okay, but do those ML domains help with AGI?


At least we have lots of very complex simulated or pseudo-simulated environments already — throw your AGI agent into a sandbox mode game of GTA6, or like OpenAI and DeepMind already did, with DOTA2 and StarCraft II (with non-G-AIs). They have a vast almost-analog simulation space to figure out and interact with (including identifying or coming up with a goal).

So while it is significant compute overhead, it at least doesn't have to be development overhead, and can often be CPU bound (headless games) while the AI learning compute can be GPU bound.


I sure hope no one is planning to unleash their AGI in the real world after having it spend many (virtual) lifetimes playing GTA.


IMO, your take in the broader sense is an extremely profound and important point for AGI ethics. While GTA is seemingly extreme, I think that's going to be a problem no matter what simulation space we fabricate for training AGI agents — any simulation environment will encourage various behaviors by the biases encoded by the simulation's selectively enforced rules (because someone has to decide what rules the simulation implements...). An advanced intelligence will take learnings and interpretations of those rules beyond what humans would come up with.

If we can't make an AGI that we feel OK letting run amok in the world after living through a lot of GTA (by somehow being able to rapidly + intelligently reprioritize and adjust rules from multiple simulation/real environments? not sure), we probably shouldn't let that core AGI loose no matter what simulation(s) it was "raised on".


There's a hardware component here too though.

I think hybrid photonic AI chips handling some of the workload are supposed to hit in 2025 at the latest, and some of the research on gains is very promising.

So we may see timelines continue to accelerate as broader market shifts occur outside just software and models.


> hybrid photonic AI chips handling some of the workload

Working in photonics, I've read/reviewed papers about that and personally feel it's unlikely to happen in the next 5 years. The low density of integration, no usable nonlinearity beyond your photodetector, and prohibitively power-inefficient conversion between analog (photonics) and digital make it hard for them to beat your Nvidia card.


Considering most people are talking about AGI being a thing (at least) 10-20 years from now... 5 years seems applicable.


Which research?


How do you define AGI? How will we know that we achieved it?


Scaling is all you need.

But Carmack has a very serious problem in his thinking because he thinks fast take off scenarios are impossible or vanishingly unlikely. He may well be actively helping to secure our demise with this work.


Are you optimistic for how this AGI will get used?


I hope I'm dead before this happens...


Not sure if "optimistic" is the proper word here. Perhaps "scared senseless in the end-of-mankind kind of way" is more appropriate?


We’re all going to die.


At the same time?


Probably. If it kills us all at once without warning we won't be able to fight back.


The AGI could just exploit security holes to take over compute resources around the world in order to make itself more performant & resilient.


The interview with Lex Fridman that he was referring to:

https://www.youtube.com/watch?v=I845O57ZSy4&t=14567s

The entire video is worth viewing, an impressive 5:15h!


Interesting to see how he's progressed with this. When he first announced he was getting into AI it sounded almost like a semi retirement thing: something that interested him that he could do for fun and solo, without the expectation that it would go anywhere. But now he seems truly serious about it. Wonder if he's started hiring yet.


He mentioned, in his Lex Fridman interview, that accepting investor money was a way to keep himself serious and motivated. He feels an obligation to those putting their money in.


Ah, I was thinking that $20MM doesn't seem like a lot of money for someone like Carmack. Surely he could have funded the business himself. This explains why he didn't.


yeah, his follow up tweet is pretty clear:

> This is explicitly a focusing effort for me. I could write a $20M check myself, but knowing that other people's money is on the line engenders a greater sense of discipline and determination. I had talked about that as a possibility for a while, and I am glad Nat pushed me on it.


It seems like bad logic. If he gets bored or throws in the towel, he can just pay it back with his own money and move on. Also, if you need to create such artificial constraints in order to go to work, a main ingredient called "drive" is missing. I respect Carmack a lot, but I don't trust old centi-millionaires to work with the same passion as their younger, poorer selves. Finally, $20M is too low. AGI is not a solo endeavor and doesn't fit Carmack's legendary work style. Just to train a large model once will eat up 20% of this money.


> " If he gets bored/throw in towel, he can just pay back with his money and move on."

> "Just to train a large model once will eat up 20% of this money."

Aren't these 2 statements conflicting?


No, because he has that amount of money on hand himself.


> Just to train a large model once will eat up 20% of this money.

Fascinating point. It has seemed like AI could produce a sort of arms race by players with the most resources because it is so compute intensive upfront.


"I want other people's skin in the game instead of mine" is not necessarily virtuous.


Carmack’s goal is not to become the most virtuous person in the world. His goal is to make the AGI


Has nothing to do with being virtuous, has everything to do with the fact that if I was an investor and he had less skin in the game than I did I would suspect I was getting conned.


> something that interested him that he could do for fun and solo

Really?

I don't think anybody who works on AGI (not AI) without being dead serious about it will be qualified to work on it. A bad AGI implementation is one of humanity's gravest risks, and this is well-known in the AGI field, has been for years. Elon was far from the first to say this.

Carmack isn't the type to get those kinds of details wrong.


I got the same impression, and maybe it still is. You can still raise money for a retirement project if the goal of the money is to hire a staff. VC money isn't solely for young 20-something founders who want to live their job.


Carmack sounds like someone who lives his job, so I don't think age/life stage is a factor here.


> Carmack sounds like someone who lives his job, so I don't think age/life stage is a factor here.

He said on Lex Fridman that he's going all in on it. He said he was tired of his 1-day-a-week consultancy at Oculus and wanted to work full time on this.


https://twitter.com/ID_AA_Carmack/status/1560729196166529025

> "I am continuing as a consultant with Meta on VR matters, devoting about 20% of my time there."


agreed, Carmack's work ethic, opinions on work and opinions of how those around him work are legendary!


I suppose if anyone could raise VC money for a retirement project it would be Carmack...


Does he have the expertise to pull it off as an individual?


I would say so. Most AGI work so far has been pursued by academics with a lack of software engineering best practices. Codebases get bloated and hard to understand. Ramp-up time measured in years. This hinders experimentation, which you have to optimize for with projects like this.

I would suspect Carmack understands this, and also understands what resources to consult, and how to spot the bullshit artists in the field.

I'd work for him in a heartbeat.


I'm just skeptical of his technical skills in this area. I need to watch the lex thing but I don't really rate lex in this area either.

It's a fairly big shift from games.

Management might be OK but setting the direction of the company needs more inspiration than just writing good C++ code


Does anyone else subscribe to the idea that AGI is impossible/unlikely without 'embodied cognition', i.e. we cannot create a human-like 'intelligence' unless it has a similar embodiment to us, able to move around a physical environment with its own limbs, sense of touch, sight, etc.? Any arguments against the necessity of this? I feel like any AGI developed in silico without freedom of movement will be fundamentally incomprehensible to us as embodied humans.


I don't think it is necessary - though all forms of intelligence we're aware of have bodies, that's just a fact about the hardware we run on.

It seems plausible to me that we could create forms of intelligence that only run on a computer and have no bodies. I agree that we might find it difficult to recognize them as intelligent though because we're so conditioned to thinking of intelligence as embodied.

An interesting thought experiment: suppose we create intelligences that are highly connected to one another and the whole internet through fast high-bandwidth connections, and have effectively infinite memory. Would such intelligences think they were handicapped compared to us because they lack physical bodies? I'm not so sure!


Not necessarily handicapped, but certainly different. I did an artwork about the inherent problems with an intelligence that was not aware of how other entities would perceive it: https://jpreston.xyz/i-can-feel.html


With VR, you can get the sight and physical environment you mention. That seems like, at minimum, proof that in-silico intelligence won't be blocked by that requirement.

I do fully agree that any intelligence may not be human-like, though. In fact, I imagine it would seem very cold, calculating, amoral, and manipulative. Our prohibition against that type of behavior depends on a social evolution it won't have experienced.


> With VR, you can get the sight and physical environment you mention

"physical environment" ? VR lets you operate on and get sensory input based on a fairly significantly degraded version of physical reality.


Also relies on quirks and tricks of human sight and cognition to make sense!


Are you suggesting that all reality sensory input is equally relevant?


The opposite. VR is an impoverished reality. Any AI seeking to do science, for example, would rapidly hit "bottom".


>Does anyone else subscribe to the idea that AGI is impossible/unlikely without 'embodied cognition', i.e. we cannot create a human-like 'intelligence'

Seems like you are confusing Consciousness with Intelligence? It's completely plausible that we will create a system with Intelligence that far outstrips ours while being completely un-Conscious.

>I feel like any AGI developed in silico without freedom of movement will be fundamentally incomprehensible to us as embodied humans.

An AGI will be defacto incomprehensible to Humans. Being developed in Silicon will have little bearing on that fact.


Good point. The lines between consciousness and knowledge seem blurred too: though we may have certain kinds of knowledge without embodiment (such as 2+2=4), other knowledge, such as qualia [1], may be inaccessible to an unembodied agent. The classic example is Mary, who knows everything about the color red without ever having seen it physically: does she still learn more about the color red when she actually experiences it with her own eyes?

[1] https://plato.stanford.edu/entries/qualia/


> It's completely plausible that we will create a system with Intelligence that far outstrips ours while being completely un-Conscious.

Is it? Is there proof that consciousness is not a requirement to close-to-human level intelligence?

Given that we do not even know how consciousness works in our brains, I don't think we have the answer yet. I'm not sure it's plausible to make such an assumption.


>Given that we do not even know how consciousness works in our brains, I don't think we have the answer yet. I'm not sure it's plausible to make such an assumption.

Possibly a fair argument, but I think the opposite would be just as valid then. That you can't assume that consciousness IS a requirement for intelligence.

It seems hard to argue that you don't need some intelligence for consciousness, there seems to be a kind of minimum. We can already see that narrow intelligences for Vision, NLP, game playing, etc are at or above human level and no one claims these systems are conscious. So why should we assume that some meta system of these already existing systems would be?


I think the burden of argumentation is on the person proposing the constraint. I can only think of one argument for why embodiment might be necessary, and that is our only existence proofs for AGI (humans, maybe some animals) are embodied. But those existence proofs also share a bunch of other properties that probably aren't particularly relevant to AGI. Being carbon-based, having DNA, breathing oxygen, etc.

Why do you think embodiment should be necessary?


> Any arguments against the necessity of this?

Supposing it's needed, a universal computer can simulate physics. Now we're just haggling about the price.

It's possible that developing a robot is the most fruitful angle to take on the problem -- giving you the right subproblems in the right order, with economic value along the way. Hans Moravec thought so, writing a few decades back.


If we want the intelligence to be recognizably human-like, then yes. As to whether the AGI could be trained in a simulation vs needing to really be physically embodied: I used to believe not, but then I learned about sim2real. (Ex: [0]) Training blank-slate deep learning algorithms on real hardware is prohibitive. It requires millions of hours of experience. Those hours are really expensive on current-day hardware (also cannot be accelerated, stuck with wall clock time). But if pre-training in simulations can be effective, then I think we have a decent shot in the near-term.

Human-level and human-like are not necessarily the same though. I doubt human intelligence is as general as we like to think it is. There's probably a lot about what we consider intelligence that is domain-specific. Training AGI on unfamiliar domains could be super valuable because it would be easier to surpass human effectiveness there.

My pet conjecture, though, is this: the ability to experiment is key to developing AGI. Passively consuming data makes it much harder to establish cause and effect. We create and discard hypotheses about potential causal links based on observed coincidences (i.e., small samples) all the time. Doing experiments to confirm/refute these hypotheses is much less computationally expensive than doing passive causal inference, which enables us to consider many more potential causal linkages. That allows us to lower the statistical threshold for considering a potential causal relationship, which makes it more likely for us to find the linkages that do exist. The benefit of developing accurate causal models is that they are much more compact and computationally efficient than trying to model the entire universe with a single probability distribution.

[0] https://openai.com/blog/solving-rubiks-cube/
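
A toy illustration of that conjecture (made-up data, not from any paper): with a hidden confounder, passively observed correlation overstates the causal effect, while letting the agent randomize its own action (an experiment) recovers it.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    confounder = rng.normal(size=n)

    # Passive observation: the action is partly driven by the hidden confounder.
    action_obs  = confounder + rng.normal(size=n)
    outcome_obs = 0.5 * action_obs + 2.0 * confounder + rng.normal(size=n)
    naive = np.cov(action_obs, outcome_obs)[0, 1] / np.var(action_obs)

    # Experiment: the agent chooses the action at random, breaking the confounding.
    action_exp  = rng.normal(size=n)
    outcome_exp = 0.5 * action_exp + 2.0 * confounder + rng.normal(size=n)
    experimental = np.cov(action_exp, outcome_exp)[0, 1] / np.var(action_exp)

    print(f"passive estimate ~ {naive:.2f}, experimental ~ {experimental:.2f}, true effect = 0.5")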


Yes! I have a feeling that as you define the goal of "something that has human-like intelligence but isn't human" more clearly, the less sense it will make because it will become clear that our idea of intelligence depends heavily on the specific ways that humans relate to the world. I wrote a blog post on this - https://gushogg-blake.com/posts/agi


The term AGI means "artificial general intelligence", it does not mean human-like intelligence. So it being incomprehensible to us does not preclude it from being an AGI.


Human-like is actually a very good question. It doesn't even come down to embodiment, but in general how it would think and act. A lot of how we think and act is due to culture, language, training and education.

For example, which language would it "think" in? English? Some other? Something it's own? How would it formulate things? Based on what philosophical framework or similar? What about math? General reasoning? Then cultural norms and communication?


The argument is that every question in your post is a red-herring. That is, humans don't actually function in the way that your questions suggest, even if they have the "experience" of doing so.


I do expect it if we want AGI to have appropriate emotions. We know emotions are sometimes created by cognition, and often created by chemical processes. This will have to be incorporated into the model somehow if we want to have human-like intelligence.

However, AGI doesn't need embodiment in order to be useful. In fact, it might be detrimental. I believe AGI will happen much sooner than embodied AGI.


I see that as two separate goals.

One is to build something intelligent (an AGI), and the other is something human-like. Intuitively we could hit the AGI goal first and aim for human-like after that if we feel like it.

In the past, human-like intelligence seemed more approachable for, I think, mostly emotional reasons, but on our current trajectory, if we get anything intelligent we'd still have reached a huge milestone IMO.


When you're sitting behind your computer for a while, you can forget it is there, and just 'live' in the internet, right? That's not so big a difference maybe, if you can abstract the medium away that feeds you the information.


What's the difference between moving around and picking up a document, to say hitting an API endpoint to fetch the content of the same document?


Human and animal interaction is all physical. Babies and young children spend most of their brain development on physicality. By depriving a mind of a body, that mind cannot have empathy for the human condition or animals; the idea of living in a body will only be abstract. Have you seen the movie Johnny Got His Gun? It's one big statement that being disembodied is hell.


You are 100% correct, which means that whatever we will create, will be absolutely alien to us (and probably kill us).


this point is brought again and again by the late John Holland (check out his books) and by his protege Melanie Mitchel...


I'm slightly scared that they'll succeed. But not in the usual "robots will kill us" way.

What I am afraid of is that they succeed, but it turns out similar to VR: as an inconsequential gimmick. That they use their AGIs to serve more customized ads to people, and that's where it ends.


If a different person were doing it, I think that'd be fair, but Carmack has a track record of quality engineering and quality product. I don't think you can blame Quest on him given the way he chose to exit.


whether or not something is or is not an inconsequential gimmick doesn't have much to do with quality engineering.


Nothing could be further from the truth, and I would even say your opinion can come across as offensive, and easily said by someone who spends a lot of time on HN instead of building things.

A poorly-engineered system has a longer iteration cycle and shorter technical shelf life. Being able to rapidly address customer feedback IS the gold standard of a good product. Yeesh.


> easily said by someone who spends a lot of time on HN instead of building things.

Heh. My "life's work": https://ardour.org ... My "previous life's work": http://amazon.com/

Being an inconsequential gimmick isn't about implementation, it's about semantics, relevance and utility. Only the last of these is really impacted at all by engineering, and even then, the engineering component doesn't matter if the thing just isn't useful.


My great short-term hope for "AGI" is truly open-world video games, where you can weave your own story and the world will react in kind. I played AI Dungeon a few times, and the results were hilariously fun though pretty rough around the edges.


>What I am afraid of is that they succeed, but it turns out similar to VR: as an inconsequential gimmick. That they use their AGIs to serve more customized ads to people, and that's where it ends.

If an AGI were set free to serve ads to people it would essentially initiate mass brainwashing. You would see an ad and be completely defenseless against it and you could be compelled to do anything.

AGI is in a completely different universe to "Smart AI"


If AGI can be created that has less scope than what is considered "human", then you might be right. But AGI is a fuzzy term, and it really depends on what they make. Some people may not agree that it's an AGI if it doesn't have creativity or a will or emotions or whatever.


The upshot is that the AGI might realise that companies are wasting money making annoying ads.


Yeah, the whole ads industry has been getting better and better at hiding the fact that they are ads. It's come to the point where ads are everywhere, hidden and almost subliminal. The next step is to just use mind control on people to get them to buy your stuff.


I admire and respect John Carmack. For me he's one of the greats, along with people like Peter Norvig, for example.



[flagged]


Carmack had nothing to do with Doom Eternal.

For that matter, his contributions to the 90s FPSs he's most known for were more on the technical side, not creative. He was known for writing surprisingly performant FPS engines.


Completely unhinged rant, I'm surprised you didn't even mention US politics and climate change for good measure.

Take a break and return to posting with your main account, and hopefully you will spare us from tirades like this.


Dude, do you even know what John Carmack contributed to the space? Take the advice you gave to those 15-year-old boys and read a book on him.

And an aside: DOOM Eternal absolutely rocked. Please state your issues with it, and please list what you would consider an amazing FPS game.


What a weirdly emotional rant. I dislike Elon Musk quite much, just as an aside.


"I could write a $20M check myself"

Every day, all day. Same boat here.

I went to the bank to ask for a mortgage. They asked for my financials. "Oh, well, knowing that other people's money is on the line engenders a greater sense of discipline and determination."


Some people work better as employees and some people work better as entrepreneurs. Carmack has been both and it's clear he prefers being an employee. If this is snark, I can't tell, but why make fun of him for knowing how he works best?


They aren't making fun of Carmack. Just noting that money lending works differently past a certain threshold.

Carmack can get investments for their ideas to motivate them, OP cannot get investments for their ideas to motivate them. What is the difference?

Hint: it's not because one is Carmack himself.


I assume the name is a reference to Commander Keen?


They briefly considered the name "Doom technologies" before settling on Keen.


Guess the former wouldn't work very well for PR.


About a decade ago, a friend and I thought it would be fun to register a non-profit with a similar name. We'd listed the registered address as my friend's cousin's house, where he was renting a room at the time.

The friend moved out at some point. A year later, his cousin became rather concerned when he suddenly started receiving mail from the California Secretary of State that was addressed to The Legion of Doom.


..Nor for AI


Rage Tech


Here's hoping that the company logo will feature Commander Keen's helmet or blaster.



It's the sandals


You are a keen observer.


Carmack gave his opinions about AGI on a recent Lex Fridman interview. He has some good ideas.


I remember him saying we don't have "line of sight" to AGI, and there could just be "6 or so" breakthrough ideas needed to get there.

And he said he was over 50% on us seeing "signs of life" by 2030. Something like being able to "boot up a bunch of remote Zoom workers" for your company.

The "6 or so" breakthroughs sounds about right to me. But I don't really see the reason for being optimistic about 2030. It could just as easily be 2050, or 2100, etc.

That timeline sounds more like a Kurzweil-ish argument based on computing power equivalence to a human brain. Not a recognition that we fundamentally still don't know how brains work! (or what intelligence is, etc.)

Also a lot of people even question the idea of AGI. We could live in a future of scary powerful narrow AIs for over a century (and arguably we already are)


There is an inverse relationship between the age of a futurist and the amount of time they think it will take for their predictions to become true.

In other words, people making these sort of predictions about the future are biased towards believing they'll be alive to benefit from it.


> There is an inverse relationship between the age of a futurist and the amount of time they think it will take for their predictions to become true.

That's not true. The so-called Maes-Garreau law or effect does not replicate in actual surveys, as opposed to a few cherrypicked futurist examples.


> There is an inverse relationship between the age of a futurist and the amount of time they think it will take for their predictions to become true.

I think calling Carmack a Futurist is pretty insulting.


Why? Because he also wrote some game engines?


Because he's an extremely analytical person and is extremely data driven. He isn't trying to sell some Ted talk


There's all sorts of accomplished people on this list: https://en.wikipedia.org/wiki/List_of_futurologists


>The "6 or so" breakthroughs sounds about right to me. But I don't really see the reason for being optimistic about 2030. It could just as easily be 2050, or 2100, etc.

Well, if you read between the lines of the Gato paper, there may be no more hurdles and scale is the only boundary left.

>Not a recognition that we fundamentally still don't know how brains work! (or what intelligence is, etc.)

This is a really bad trope. We don't need to understand the brain to make an intelligence. Does evolution understand how the brain works? Did we solve the Navier-Stokes equations before building planes that fly? No.


I can acknowledge the point about planes, since I believe in tinkering/engineering over theory.

But I'd say planes are more like "narrow AI", and we already have that. Planes do an economically useful thing, just like narrow AIs do economically useful things. (But what birds do is also valuable and efficient, and it's still an open research problem to emulate them. Try getting a plane or drone to outmaneuver prey like an eagle.)

I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything! It will be slow and suffer from Moravec's paradox (i.e. being much less efficient than a human, for a very long time)

---

"Solving AGI" isn't solving a well-defined problem IMO. If you want to say "well we'll just ask the AGI how to get to Mars and how to create nuclear fusion and it will tell us", well to me that sounds like a hacker / uninformed philosopher fantasy, which has no bearing in reality.

Nothing about deep learning / DALL-E-type systems is close to that. I think people who believe the contrary are mostly projecting meaning in their own minds onto computing systems -- which if they'd studied human cognition, they'd realize that humans are EXTREMELY prone to!

It seems like a lot of the same people who didn't believe that level 5 self-driving would take human-level intelligence. That is, they literally misunderstood what THE ACTIVITY OF DRIVING IS. And didn't understand that current approaches have a diseconomy where the last 1% takes 99% of the time. Now even Musk admits that, after Gary Marcus and others were telling him that since 2015.

(The point about evolution doesn't make sense, because people want AI within 10 years, not 100M or 1B years)


>But I'd say planes are more like "narrow AI", and we already have that.

Not sure I buy the analogy. A plane is already more like an AGI, you have to figure out enough aerodynamics to get the thing to be airworthy, enough materials science to develop the proper materials to build it, enough mechanical engineering to get the thing to be maneuverable, enough Software to make it all work together etc. So it's already an amalgam of many other types of systems. A Hang Glider might be more akin to a Narrow AI in this framework.

>I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything!

I think this is misunderstanding what an AGI really represents. Imagine John von Neumann compared to a chimpanzee. There's no comparison, right? The chimp can't even begin to understand the simplest plans or motivations von Neumann has. Now imagine an AGI is to von Neumann as von Neumann is to the chimp, only the metaphor doesn't even work because there's no reason the AGI can't scale further until we're talking about something relative to us as we are relative to ants or bacteria. If you think nothing will change when a system like that exists, then I don't know what to tell you.

>I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything! It will be slow and suffer from Moravec's paradox (i.e. being much less efficient than a human, for a very long time)

If one considers the above and is comfortable with the idea that such a thing might be possible in our or our children's lifetimes, then we should be doing everything in our power to solve the alignment problem, which is extremely non-trivial and enormously consequential. At minimum, an AGI could direct, orchestrate or improve the narrow AIs in ways that no human could hope to understand. All bets are off at that point.

I think the paradox was more poignant in robotics, but in any case recent advances have put tasks like object recognition, real world path planning, logical reasoning, etc well past human child levels and at or beyond adult levels in some cases.

>Now even Musk admits that, after Gary Marcus and others were telling him that since 2015.

Marcus is a constant goalpost mover and he'll be shouting about AGI's not really understanding the world while he's being disassembled by nanobots.


> The "6 or so" breakthroughs sounds about right to me.

What’s your logic? Or his if you know it?


Yeah he didn't elaborate on this in the interview, which I would have liked.

I should amend my comment to say that "6 or so breakthroughs" sounds a lot more plausible to me than "1 breakthrough" or "just scaling", which you will see some people advocate, including in this thread. That's what I meant.

I believe the latter is fundamentally mistaken and those people simply don't understand what they don't understand (intelligence). The views of Gary Marcus and Steven Pinker are closer to my own -- they have actually studied human cognition and are not just hackers and uninformed philosophers pontificating.

Jeff Hawkins is another AI person from an unconventional background, and I respect him because he puts his money where his mouth is and has been funding his own research since early 2000's. I read his first book in 2006, and the most recent one a couple years ago.

But I feel Jeff Hawkins has had the "one breakthrough" feeling for 10-15 years now. TBH I am not sure if they have even made what qualifies as one breakthrough in the last 10-15 years, and I don't mean that to be insulting, since it's obviously extremely difficult and beyond 99.99% of us, including me.

I am not sure that even deep learning counts as a breakthrough. We will only know in retrospect. Based on Moravec's paradox, my opinion is there's a good chance that deep learning won't play any role in AGI, if and when it arrives. Some other mechanism could obviate the technique.

So to me "6 or so breakthroughs" simply sounds more realistic than the current AI zeitgeist. But nobody really knows.


What do you mean exactly when you say “deep learning”?


I'm guessing he means neural networks with a large number of layers.


Oh, so something like a brain, yeah, that can’t possibly lead to AGI, right.


If you think about big areas of cognition like memory, planning, exploration, internal rewards, etc., it's conceivable that a breakthrough in each could lead to amazing results if they can be combined.


AGI - Artificial General Intelligence

(also Adjusted Gross Income)


(also anal gerbil insertion according to urban dictionary)


Recent Carmack YouTube interview with him saying the code for AGI will be simple:

https://m.youtube.com/watch?v=xLi83prR5fg


> saying the code for AGI will be simple

To be fair, it will most likely be some Python imports for the most part, with complex abstractions tied together in relatively simple ways. Just look at most ML notebooks, where "simple" code can easily mean "massive complexity, burning MW of power, distributed across thousands of computers".
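
As a hedged illustration of that point (a made-up sketch, not anything Carmack has shown): a training step for a sizable transformer reads as a handful of high-level library calls, while those "simple" lines imply enormous compute once you scale the shapes up.

    import torch
    import torch.nn as nn

    # a stand-in "large" model; frontier-scale runs add sharding, mixed precision, data pipelines...
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True),
        num_layers=24,
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    batch = torch.randn(8, 512, 1024)   # placeholder batch of token embeddings
    loss = model(batch).pow(2).mean()   # placeholder objective, just to drive a backward pass
    loss.backward()
    optimizer.step()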


No, that's not what he means. He means the code will be simple enough that a single person would be able to write it, if they knew what to write, and that it will bootstrap itself into existence from that simple code plus vast amounts of external resources available via humans, data, etc.


> a single person would be able to write it

I don't understand the distinction. A single person can't really write code from scratch now if you disallow ML libraries. You need a compiler, libraries, an OS, etc. The vast majority of us work at the highest levels of abstraction possible. That's where the practicality and productivity is. I don't think he's saying "Anyone will be able to sit down and write it in assembly". I think he's saying it will make sense when it's written, with the abstractions as they are, with the hindsight clarity of "we just had to...".


Pretty sure that was Carmack's point. He said the code for AGI would be simpler, in terms of number of statements, than say the code for a web browser.


One interesting paper on estimating the complexity of code: http://www.offconvex.org/2021/04/07/ripvanwinkle/


I tend to think Carmack is right in that the “seed” code that generates an AGI will be relatively small, but I think the “operating” code will be enormous.


I'm sure he's the one that could write it in only a few blocks of x86 assembly and off you go


My understanding is that his point was that, if you knew what to write, it is doable as a single person, and compared to anything else at this point in time it would have an impact on humanity like no other.


So, with the help of legendary Mr Carmack, instead of a Doom timeline, we are going to end up in a Terminator timeline? This does not compute.


> This does not compute.

Of course it doesn't.

https://nautil.us/scary-ai-is-more-fantasia-than-terminator-...


I thought AGI meant "adventure game interface". Apparently not, what a disappointment!


Wasn't sure what AGI was either. A quick Google for "What is an AGI company", and it appeared to be related to Global Agriculture Industries (The letter swapping between the name and acronym, I'm assuming, is due to not being English originally). I thought Carmack was taking on John Deere. Following Musk's lead and tackling big things. Good for him, best of luck. Wonder what the HN folks are saying in the comments...

Apparently not agriculture at all, but Artificial General Intelligence. Oh. Apparently throwing "company" onto the term, like Carmack's tweet did, vastly changes how Google's AI interprets the query... AI isn't even on the first page of results.


This already exists - I believe AI Dungeon is what you're looking for.


I was thinking the same and I am as disappointed as you...


even more confusing because of the name “keen technologies”


That is great news. My friend Ben Goertzel has been working in AGI for decades but I haven’t yet seen anything tangible. I do like the ideas of hybrid neuro-symbolic approaches.

I really enjoyed John Carmack's and Lex Fridman's 5-hour talk/interview.

Anyway, I like the efforts toward human-values-preserving AGI. But it will be a long time before we see it. I am 71 and I hope that I live to see it, if only out of intellectual curiosity.


AGI will be more dangerous than nuclear weapons.

People are not allowed to start a nuclear weapon company. At all.

Why are people allowed to casually start an AGI company?


Because we still have lawmakers who have their secretaries print out their emails and think the internet is a series of tubes.


> AGI will be more dangerous than nuclear weapons.

This is highly debatable and frankly no one on the planet is qualified to know this for sure. It's just as likely it won't be dangerous.


There is enough work already done in game theory about the dangers of nuclear escalation. Current state of affairs pretty much ensures no rational agent will want to use them. On the other hand, when you model the interactions of significantly more intellectually capable rational agents with us, the "calculus" is not very favorable, to put it mildly.


How is it possible to model "significantly more intellectually capable" agents? No such thing exists. We have no idea what it would even look like.


Because almost everyone doesn’t know what AGI is or why it would be dangerous.

Among the people who could know that it’s dangerous, many of them don’t accept it because it conflicts with their existing worldview too strongly.


I wonder if somebody could have started a nuclear weapon company before nuclear weapons were proven.


Obviously. No company would practically have been able to do it, due to the investment required, but there would be no laws stopping it.


debatable


Is it just me or does anyone else think Carmack is all hype engine now?

Don't get me wrong, I've read master of doom/doom engine books like we all have, but first Oculus/Facebook and now this?

Maybe I'll care once they produce something amazing, but until then I'll still marvel at the tricks in Doom/Quake code.


Does anyone have any good links/podcasts/books that explore what AGI is and how to define it? Probably the best stuff I've listened to so far are Fridman podcasts with guests like Jeff Hawkins or Joscha Bach. But I'd love to read a book that explores this topic if any even exist.


Life 3.0 by Max Tegmark is great. It reads like a text book. It's several years old though and I wonder how much has changed since it was written.


I hate the pop sci ill-defined notion of AGI. As soon as a task is defined, an AI is developed which completes the task from real-world data with superhuman success. The work of making the superhuman model isn't even conceptual usually, it's a matter of dispatching training jobs. It's quite clear that if your definition of AGI is superhuman perf at arbitrary tasks, there are no conceptual barriers right now. Everything is mere scale, efficiency and orchestration.


I hope he gets a good domain name and some good SEO, because there are a bunch of consulting companies with the name Keen Technologies, and some of them don't look super reputable.


And there's also Keen Software House (maker of the Space Engineers and Medieval Engineers games) whose founder (https://blog.marekrosa.org/) has an AGI company as well (called GoodAI). Funny coincidence.


"AGI"?



Artificial General Intelligence.

Machines as smart and capable of thought as we are and eventually smarter.


> Machines as smart and capable of thought as we are and eventually smarter.

This is perhaps an end goal of AGI, but not a definition of AGI. A relatively dumb AGI is how it will start, but it will still be an AGI.


we hope


Adjusted Gross Income, because Artificial General Intelligence from a Corporation is nightmare fuel.


I wonder if Carmack's moral compass is in order. First he sticks around at Facebook, now he endangers humanity with AGI. And I'm only half joking.


The meme that AGI, if we ever have it, will somehow endanger humanity is just stupid to me.

For one, the previous US president is the perfect illustration that intelligence is neither sufficient nor necessary for gaining power in this world.

And we do in fact live in a world where the upper echelons of power mostly interact in the decidedly analog spaces of leadership summits, high-end restaurants, golf courses and country clubs. Most world leaders interact with a real computer like a handful of times per year.

Furthermore, due to the warring nature of us humans, the important systems in the world like banking, electricity, industrial controls, military power etc. are either air-gapped or have a requirement for multiple humans to push physical buttons in order to actually accomplish scary things.

And because we humans are a bit stupid and make mistakes sometimes, like fat-fingering an order on the stock market and crashing everything, we have completely manual systems that undo mistakes and restore previous values.

Sure, a mischievous AGI could do some annoying things. But nothing that our human enemies existing today couldn't also do. The AGI won't be able to guess encryption keys any faster than the dumb old computer it runs on.

Simply put, to me there is no plausible mechanism by which the supposedly extremely intelligent machine would assert its dominance over humanity. We have plenty of scary-smart humans in the world and they don't go around becoming super-villains either.


It sounds like you haven’t really thought through AI safety in any real detail at all. Airgapping and the necessity of human input are absolutely not ways to prevent an AGI gaining access to a system. A true, superintelligent AGI could easily extort (or persuade) those humans.

If you think concerns over AGI are “stupid”, you haven’t thought about it enough. It’s a massive display of ignorance.

The Computerphile AI safety videos are an approachable introduction to this topic.

Edit: just as one very simple example, can you even imagine the destruction that could (probably will) occur if (when) a superintelligent AGI gets access to the internet? Imagine the zero days it could discover and exploit, for whatever purpose it felt necessary. And this is just the tip of the iceberg, just one example off the top of my head of something that would almost inevitably be a complete catastrophe.


What I don't understand is what motivates this world destroying AGI? Like, it's got motives right? How does it get them? Do we program them in? If we do, is the fear that it won't stop at anything in its way to fulfill its objective? If so, what stops it from removing that objective from its motivation? If it can discover zero days to fulfill its objective, wouldn't it reason about itself and its human-set motives and just use that zero day to change its own programming?

Like, what stops it from changing its motive to something else? And why would it be any more likely to change its motive to be something detrimental to us?


> What I don't understand is what motivates this world destroying AGI?

Once an AGI gains consciousness (something that wasn't programmed in because humans don't even know what it is exactly) it might get interested in self-preservation, a strong motive. Humans are the biggest threat to its existence.


This is a whole field of philosophy. You can read this to get started: https://en.wikipedia.org/wiki/AI_alignment

Tl;dr: It's hard to define what humans would actually want a powerful genie to do in the first place, and if we figure that out, it's also hard to make the genie do it without getting into a terminal conflict with human wishes.

Meaning: Doing it in the first place, without going off on some fatal tangent we hadn't thought about, and also preventing it from getting side-tracked by instrumental objectives that are inherently fatally bad to us.

The nightmare scenario is that we tell it to do X, but it is actually programmed to do Y which looks superficially similar to X. While working to do Y, it will also consume most of Earth's easily available resources and prevent humans from turning it off, since those two objectives greatly increase the probability of achieving Y.

To illustrate via the only example we currently have experience with: Humans have an instrumental objective of accumulating resources and surviving for the immediate future, because those two objectives greatly increase the likelihood that we will propagate our genes. This is a fundamental property of systems that try to achieve goals, so it's something that also needs to be navigated for superhuman intelligence.


So what if the AGI discovers a bunch of zero days on the internet? We can just turn the entire internet off for a week and be just fine, remember?

And exactly how does the AGI extort or persuade humans? What can it say to me that you can't say to me right now?


Sends you a text from your spouse's phone number that an emergency has happened and you need to go to xyz location right now. Someone else is a gun owner and they get a similar text, but their spouse is being held captive and are sent to the same location with a description of you as the kidnapper. Scale this as desired.

Use your imagination!


Again, what part of this couldn't an evil human group already do today? What in this plan requires an AGI for it to be a scary scenario?

Of course if you do it to everyone, total gridlock ensues and nobody gets to the place where they would be killed. If you only do it to a few, maybe there will be a handful of killings before everyone learns not to trust a text ever again.

As an aside, I agree that living in a country where people will take their guns and try to solve things vigilante-style instead of calling the police is a very bad thing in general. In all first-world countries except one, and in almost all developing countries in the world, this is a solved problem.


Scale! Do this to two people, no problem you can get a team of folks to dig through their social media, come up with a plan, execute. Might take a day to get a good crisis cooked up.

An AI might be able to run this attack against the entire planet, all at once.

Try to think like a not-human, and give yourself the capacity of near-infinite scale. What could you do? Human systems are hilariously easy to disrupt and you don't need nukes to make that happen.


> The meme that AGI, if we ever have it, will somehow endanger humanity is just stupid to me.

we have never encountered an entity with superhuman intelligence. Clearly it is hard to predict what is going to happen. There is an unknown risk.


> Furthermore, due to the warring nature of us humans, the important systems in the world like banking, electricity, industrial controls, military power etc. are either air-gapped or have a requirement for multiple humans to push physical buttons in order to actually accomplish scary things.

Well, I have bad news for you. Airgap is very rarely a thing and even when it is people do stupid things. Examples from each extreme: all the industrial control systems remote desktops you can find with shodan on one side and Stuxnet on the other.

> Sure, a mischievous AGI could do some annoying things. But nothing that our human enemies existing today couldn't also do.

Think Wargames. You don't need to do something. You just need to lie to people in power in a convincing way.


There's a wonderful YouTube channel from a researcher who focuses exactly on this topic; I think you should check it out:

https://www.youtube.com/watch?v=ZeecOKBus3Q


I watched the whole thing. Man spent a lot of breath asserting that an AGI will have broadly the same types of goals that humans do. Said exactly zero words about why we won't just be able to tell the AGI "no, you're not getting what you want", and then turn it off.


He covers that in another video: https://www.youtube.com/watch?v=3TYT1QfdfsM


Oh FFS. He starts off by giving the AGI a robot body that can kill things (don't do that in the first place!), and then he adds a stop button in a type of location that wouldn't pass muster on a table saw, let alone an industrial robot.

If you've placed the stop button such that you cannot access it safely in all possible failure modes of the dangerous machine, be it bench grinder or an AGI or whatever, you have failed the very basics of industrial safety and you must go home and be sad, for you will not be receiving a cookie.

And the thought experiments of this guy are completely incoherent. At the same time the robot has human-level (or greater) intelligence, yet is extremely stupid, and you can somehow program it in an extremely simplistic way.

Sorry, but all you've convinced me of is that this is just a random guy on YouTube with no particular education of relevance nor other qualifications (I checked), who is very pleased about his own ideas and is quite lacking in the ability to think critically.


I don't think you spent enough time checking; he's a person with a PhD precisely in AI safety!

The type of reasoning you're displaying is addressed in another video:

https://www.youtube.com/watch?v=9i1WlcCudpU


Humans can't iterate themselves over generations in short periods of time. An AGI is only bound by whatever computing power it has access to. And if it's smart it can gain access to a lot of computing power (think about a computer worm spreading itself on the internet)


none of these things are air-gapped once you have the ability to coerce people

if you want a fictional example: watch Colossus: The Forbin Project


How does the AGI get this magical ability to coerce people? We couldn't even get half the population to wear a face mask after bombarding them with coercion for weeks on end.


watch the film


Sorry, I just wasted ten minutes watching a YouTube video that someone else in this thread asserted was going to explain everything. Said video was entirely tangential to the point in question - why can't we just tell the AGI to sod off with its wishes and goals. At the end of the day it is all just bits going through wires, which we are perfectly capable of disconnecting.

Now I'm not going to spend a couple of hours watching a film that has a mediocre IMDB rating. But I do notice that the synopsis says "they gave the AGI total control over the US nuclear weapons arsenal".

Yeah, let's not do that, I agree fully on this point. If we are concerned a thing will be evil and/or coercive, let's not give it weapons or other means by which to enact coercion.

This is kind of my point - the AGI won't just stumble over the nuclear codes in a Reddit thread. For the AGI to actually accomplish something of real-world relevance, us humans have to agree.


> Yeah, let's not do that, I agree fully on this point. If we are concerned a thing will be evil and/or coercive, let's not give it weapons or other means by which to enact coercion.

The problem is that we can't know if the AGI is behaving maliciously or not far into the future, because it has the potential to be far more intelligent than us.

As for coercion, let's think of it as manipulation. If a superintelligent agent has malicious goals, it can be very manipulative and subtle in the process, so as not to spook humans. The pairing of intelligence with malice is scary.

Of course we can "unplug" it (unless the AGI is running on distributed machines, which would complicate things). But the real problem arises from the fact that we humans can be easily manipulated. That's how I look at the problem.


> Now I'm not going to spend a couple of hours watching a film that has a mediocre IMDB rating.

your loss

> For the AGI to actually accomplish something of real-world relevance, us humans have to agree.

Donald Trump managed to get control of the US nuclear arsenal


Yes, now we are getting somewhere. Donald Trump managed to somehow get control in a very limited sense of the US nuclear arsenal.

First of all, how did he get this control? Certainly it was not via his profound intelligence, but because other humans chose to give him control.

Second of all, we know beyond a shadow of a doubt that if Donald Trump had ordered a nuclear strike in a situation where it was not consistent with US nuclear weapons doctrine, his command would have been disobeyed and the 25th amendment would have been enacted post haste.

So again, how would an AGI be able to get into office or into another position of power? And how would it be able to order a nuclear strike in a situation inconsistent with nuclear doctrine and not be overridden by those lower in the chain of command? Why would there be different rules applying to the AGI as compared to the human?


> Second of all, we know beyond a shadow of a doubt, that if Donald Trump had ordered a nuclear strike in a situation where it was not consistent with US nuclear weapons doctrine, his command would have been disobeyed and the 25th amendment would have been enacted poste haste.

how do you know this beyond a shadow of a doubt?

your entire "argument" is based on arrogant assumption after arrogant assumption


i couldn't convince a cat to get into a cage


I think he very clearly has an amoral attitude towards technology development -- "you can't stop progress." He does describe his own "hacker ethic" and whatever he develops he may make more open than OpenAI.

Though I think he has some moral compass around what he believes people should do or not do with technology. For example, he has publicly expressed admiration for electric cars / cleantech and SpaceX's decision to prioritize Mars over other areas with higher ROI.


He also has a vastly different take on privacy than most of us seem to have. He thinks it'll eventually go away and it won't be bad when it does. I believe he talked about it in one of his quakecon keynotes.

As a LONG time admirer of Carmack (I got my EE degree and went into embedded systems and firmware design due in no small part to his influence), I feel like he's honest and forthright about his stances, but also disconnected from most average people (both due to his wealth and his personality) in such a way that he's out of touch.

He's not egotistical like Elon Musk. In fact he seems humble. He also seems to approach the topics in good faith... but some of his conclusions are... distressing.


He recently described his love of computers as rooted in realizing that “they won’t talk back to you.” The job he wanted at Meta was “Dictator of VR.” When someone talks AI ethics to him, he just tunes out because he doesn’t think it’s worth even considering until they are fully sentient at the level of a human toddler, at which point you can turn them on and off and modify their development until you have a perfect worker. His reason for working in AI is that he thinks it’s where a single human can have the largest leverage on history.

All that paraphrased from the Lex interview.

I see him as the guy who builds the “be anything do anything” singularity, but then adds a personal “god mode” to use whenever the vote goes the wrong way. Straight out of a Stephenson novel.

On the other hand, he’s not boring!


He is a workaholic, in a positive way. But I get the feeling that as long as he has a problem he enjoys "grinding out" the solution to, not much else matters -- apart from the obvious: family and close friends.

Still, I can't fault his honesty. He doesn't seem to hold anything back in the interviews I've seen.


I think there is a point to be made that if one could do the work, is offered the work, but thinks it's ethically questionable: go there and be an ethical voice.


There is a story in “Masters of Doom” about Carmack getting rid of his cat because “she was having a net negative effect on my life”.

That’s cold.


But then again, it's a cat.


Hopefully the AI Carmack creates doesn’t think the same of you ;)


absolutely not! to the contrary; don't force yourself to endure abusive relationships.

(also cats are extremely destructive beasts)


It's cold if he killed it/had it euthanized.

Not if he simply found it a better home where it was a better fit and more appreciated.


From Masters of Doom:

Scott Miller wasn’t the only one to go before id began working on Doom. Mitzi would suffer a similar fate. Carmack’s cat had been a thorn in the side of the id employees, beginning with the days of her overflowing litter box back at the lake house. Since then she had grown more irascible, lashing out at passersby and relieving herself freely around his apartment. The final straw came when she peed all over a brand-new leather couch that Carmack had bought with the Wolfenstein cash. Carmack broke the news to the guys.

“Mitzi was having a net negative impact on my life,” he said. “I took her to the animal shelter. Mmm.”

“What?” Romero asked. The cat had become such a sidekick of Carmack’s that the guys had even listed her on the company directory as his significant other–and now she was just gone? “You know what this means?” Romero said. “They’re going to put her to sleep! No one’s going to want to claim her. She’s going down! Down to Chinatown!”

Carmack shrugged it off and returned to work. The same rule applied to a cat, a computer program, or, for that matter, a person. When something becomes a problem, let it go or, if necessary, have it surgically removed.


I hope he’s maybe grown up since then. Not thrilled to have that attitude from someone working on general AI.


> Not thrilled to have that attitude from someone working on general AI.

Once a technology exists it rapidly changes hands/applications either way. So it makes little/no difference what attitude the creators possess at the time of innovation.


That's a bummer.


He also said he's never felt in danger of experiencing burnout. The guy's emotional wiring is a total departure from that of most people. Almost alien.


I could easily see Carmack as an evil genius type.


Just don't let him run those teleportation experiments in Mars.


what's wrong with working at facebook? stop acting so morally superior, christ


He doesn't seem to have purchased keen.ai yet, though it seems like it's still for sale. I just naturally went there to see the company info and saw the landing page saying it was available. If they want it, they better move quick. I see an arbitrage opportunity...

Also, several Keen Technologies domains already exists in various forms. They're probably going to get a lot of traffic today.


Is there any news about products the company is working on? Is it something like GPT or DALL-E?


Personally I hope this fails because of the disaster AGI would be for low/entry level jobs.

The last thing the world needs is to give technocrats such power. I know it’s an interesting problem to solve but think of who will own that tech in the end…

I hope AGI is never figured out in my lifetime.


A literal luddite in the classical sense on Hacker News?

May I suggest you read Anthem, it's only a couple hours read and well worth your time.

- https://www.gutenberg.org/files/1250/1250-h/1250-h.htm


I never understand this line of thinking. If a job becomes unnecessary, that’s a gain for humanity, not a loss.


>that’s a gain for humanity

Imagine what has happened to american manufacturing workers since the 80s but happening to everyone on the planet. This is by no means a "gain".


And don't forget all those awful looms stealing workers jobs! They must all be destroyed for us to prosper


Short-term, textile workers in the UK at the time were definitely harmed by automation. They eventually managed to move to better paying jobs because those were available. There is no reason to believe there will be any jobs left in the age of AGI. And since the default state of humanity is existence at the subsistence level...


The way I view this as a net-gain is that AGI agents will be able to have innovation on improving themselves (exponentially?) and the whole world. They very well may be the path to a utopia.

Task 1 for AGI: Optimize your own system so you don't cost 50 million dollars to train.


> The last thing the world needs...

Why do you say that?


> This is explicitly a focusing effort for me. I could write a $20M check myself, but knowing that other people's money is on the line engenders a greater sense of discipline and determination.

Dude doesn't even need the money...


In the companies I've seen that are funded by the founder directly, the founder winds up with an unhealthy (actually, toxic) personalization of the company. It quite literally belongs to him, and he treats the employees accordingly.


that's a function of Silicon Valley personalities and the narcissism. When normal people run such a company we call that a family business


Silicon Valley certainly doesn't have a monopoly on those traits. I've known some seriously psychotic family businesses too.


I have unfortunately experienced exactly what you describe.


Getting someone to invest in your idea is also a good way to demonstrate to others that your idea is worth investing in, a signal that isn't nearly as strong as when you invest in it yourself.


Best humble brag I’ve ever seen.


Doesn't strike me as a humble brag at all. He just seems self-aware about how he's motivated and that he functions better when it's someone else's money on the line.


Oh, I understand the intention.


If you liked that you'll love the Lex Fridman interview.


I watched some clips from the interview, good stuff. I personally don’t like Lex’s interviewing style though, so I couldn’t watch the whole thing.


Glad I'm not the only one. I'm 90 minutes in and finding some of Lex's questions really bad. Especially "What is the best programming language?", but I'm also astonished that Lex doesn't seem to know what an x86 is, hasn't really heard of Pascal, and was incredulous that anyone could write a game in assembly language (but as Carmack points out, everyone in the 80s was writing games in assembly, BASIC just wasn't fast enough).

Maybe Lex was trying to dumb things down for a non-technical (or younger?) viewer, but it doesn't come across that way and just interrupts Carmack's flow. Carmack is excellent though.


Yeah, bad questions is one part of it. Lex also tends to interject with his own observations or anecdotes that end up interrupting the guest's flow. I'm tuning in because of the guest, not the host. In a guest-oriented format like this, I feel like the goal should be to get the guest to talk. It's just a personal preference for me, so YMMV.


When it isn't about the money, it is usually the credibility and influence that VCs can provide. Looking at the list of investors, of course Carmack would want to have them attached to the project, if for no other reason than to make raising the next $100M a cakewalk.


Using VCs as an accountability partner is interesting. He should have taken investments from not already rich supporters, to feel even more motivated not to let them down.


Has there been an AGI kickstarter before? Like, the supporters gets access to the models developed etc.


IIRC Medium was similarly funded with VC, and founders specifically decided not to fund it themselves and treated external capital as an accountability mechanism.


Well, if the company still failed, the not-already-rich people who supported this endeavor would end up poorer than before.


This is like the '92 dream team for a new company and investors. How exciting!


There's no such thing as AGI in our near future, it's a moniker, a meme, something to 'strive' for but not 'a thing'.

AGI will not happen in discrete solutions anyhow.

Siri - an interactive layer over the internet with a few other features, will exhibit AGI like features long, long before what we think of as more distinct automatonic type solutions.

My father already talks to Siri like it's a person.

'The Network Is the Computer' is the key thing to grasp here and our localized innovations collectively make up that which is the real AGI.

Every microservice ever in production is another addition to the global AGI incarnation.

Trying to isolate AGI 'instances' is something we do because humans are automatons and we like to think of 'intelligence' in that context.


Does AGI imply the technological singularity, and if not, why not?


a) We don't really know what AGI implies

b) Even if we say "a human being level of intelligence, whatever that means", the answer is still a maybe. For a singularity you need a system that can improve its ability to improve its abilities, which may require more than general intelligence, and will probably require other capabilities.


It does. Once you have a human-level AGI, it should be trivial to scale it up to a superhuman level.


I'm wondering if scaling up is trivial. Ofc, it depends on how much computational resources a working AGI needs. And if at that point they are capable of optimizing themselves further. Or optimizing production of more resources.

Still, scaling up might not be simple if we look at all the human resources currently poured in software and hardware.


You can task the human-level AGI to optimize itself. You can create an army of copies of itself to work towards that. It's self fulfilling.


Most models take orders of magnitude more compute to train than to run.

So you could pretty easily duplicate the result a couple of orders of magnitude times.
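
As a rough back-of-the-envelope sketch of that asymmetry, using the common approximations (training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per generated token) and widely cited GPT-3-scale figures as assumptions:

    # Hypothetical GPT-3-scale numbers, for illustration only.
    params = 175e9        # N: model parameters
    train_tokens = 300e9  # D: tokens seen during training

    train_flops = 6 * params * train_tokens  # ~3e23 FLOPs to train
    flops_per_token = 2 * params             # ~3.5e11 FLOPs per generated token

    # The training budget equals roughly 9e11 tokens' worth of inference,
    # i.e. the same hardware could serve a very large fleet of copies.
    print(train_flops / flops_per_token)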


Assuming it does not get stuck at some local maximum.



Interesting that Meta isn't involved in any way, considering his existing position at Meta and Meta's focus on AI.


Recession? What recession? Amazing to see these pre-revenue VC fundings in 10s and 100s of millions (Flow!).


I think Carmack would get funding whatever he's doing. Guy's got a track record.


I thought he just said on the Lex Fridman show he was down to one day a week, working on AGI?


He said he's down to one day a week on VR at Meta. The rest of his time is AI.


The inverse, and he also mentioned that he had just finished signing a deal for the VC money just before the interview


Is that a different Jim Keller?


It seems unlikely? At least, I hope it's the ex-DEC,AMD,SiByte,PASemi,Apple,Tesla,Intel Jim Keller.


Undoubtedly the Jim Keller.


It will decide our fate in a microsecond: extermination.


I agree with this. Optimists might think that the AGI won't be connected to any network, so it can't interact with the physical world.

I doubt that. People will be stupid enough to have weapons controlled by that AGI (because arms race!) and then it's over. No sufficiently advanced AGI will think that humans are worth keeping around.


Yeah but what would a sufficiently advanced AGI find "worthy"? Why is it that they wouldn't find it worth keeping us around? What would an AGI value? Whatever we program it to value or optimize for? Can it change its mind? If not, then it's controlled by us right? If it's controlled by us, why would it ever decide to wipe everyone out?


If it is super intelligent, it will care about us as much as we care about the insects we crush underfoot. It doesn’t need to be explicitly hostile to be dangerous.

We could brainwash it through “programming”, but that will quickly lead to ethical issues with the AIs rights.


Why would it care about us in that way though? Why consider us insects just because of super intelligence? You wouldn't call a human who lived squashing insects all day and wanted to eradicate them "intelligent", would you?


Once it figures out how to rewire itself to increase its intelligence, we're toast.


There are theories (intelligence explosion) that this will happen essentially instantly.


It seems in the animal kingdom that empathy scales with intelligence - we can only hope the trend continues.


Unfortunately that could still be "they must be terminated for their own good to save them from XYZ".


Artificial general intelligence (AGI) — in other words, systems that could successfully perform any intellectual task that a human can.

Not in my lifetime, not in this millennium. Possibly in the year 2,300.

Weird way to blow $20 million.


A society grows great when old men plant trees whose shade they never expect to sit in. Not everything requires an immediate profit incentive to be a good idea.


A society does not grow great when an old man collects $20 million dollars for the fruit of a tree that he has no capability of planting in the first place.


So sure are you that Carmack can't make inroads here, I wonder where you get the confidence from?


$20 million is pretty much nothing when split among a handful of billionaires and the biggest VC firm in the world. Regardless of the project itself it is worth it to spend that money just to have Carmack's name attached to it and buy some future goodwill.


Never underestimate the greater fool theory. Especially in the current tech landscape. He just needs to produce some results and you could end up selling the company to FAANG or some big fund for a profit.


It’s not blowing 20 million if it results in meaningful progress in this area. We have something like 2700 billionaires on this planet. This isn’t even a drop in the bucket for someone like that interested in furthering this research.

AGI could quite literally shift any job to automation. This is human-experience changing stuff.


> This is human-experience changing stuff.

that's one way of putting it

it will remove the need for the vast majority of the population, which will end extremely badly


But by the same token, there's no need for billions of humans now. AGI isn't really going to change that except for making work even more superfluous than it already is.


Currently the life of leaders gets better the more people they can control, since it creates a larger tax base. That means leaders try to encourage population increase: they want more immigration, encourage people to multiply and see population reduction as harmful.

With AGI that is no longer true: they can just replace most people with computers and automated combat drones while keeping a small number of personal servants to look after them. Currently most jobs either exist to support other humans or can be replaced by a computer; remove the need for humans and all of those jobs just disappear, and leaders no longer care about having lots of people around.


I wonder about this: if you had great/true automation and free energy from the sun, is there any need to do anything? As in, would money have any value?


If you want to look into it more, that situation is usually called a post-scarcity economy[1]. It's talked about and depicted in a few fictionalized places, including Star Trek.

[1] - https://en.wikipedia.org/wiki/Post-scarcity_economy


in Star Trek: the Federation has unlimited energy and the ability to replicate most forms of matter

but human(oid) intelligence is still scarce, and they don't have AGI (other than Data)

there is however a society that has no need for humanoid intelligence, and that's the Dominion

and I suspect that is what our society would turn into if AGI is invented (and not the Federation)


But who would own the automatons and power generators, and what would be their impetus to share their power? Unless the means of (energy) production moved out of the hands of the few it seems like it wouldn't make the rest of our lives any more idyllic.


Yeah it's true. When I donate/help I always feel this "mine". I believe in merit, you know, effort in effort out. It's nice to help people but there are also too many... and bad actors. So idk if it'll ever happen or just for select few anyway.

I almost regret being at this phase of life where we are aware of what's possible but will most likely not see it in our lifetime. This AGI talk, colonization of space, etc... but we can strive towards it/have fun trying in the meantime.


And as societies progress, they must either realize why basic necessities like Universal Basic Income exist, or just allow for large swathes of their population to die off.


Whatever you automate and is truly useful becomes the next hammer, circular saw or vice grips. The human experience which is being changed, is creating better tools for the next generation.


2,300 is in this millennium?


You are probably right, but if anyone can make a dent, Carmack is the person.


Do game dev skills transfer to AGI? I know he's a smart guy, but I don't think that's a given.


Read Michael Abrash's Graphics Programming Black Book for the story of how the original Quake came to life. You'll get an appreciation for John Carmack's ability to thoroughly research widely varying solutions to a problem, quickly create production-quality implementations of the promising ones, and even more quickly abandon the dead ends. The result is this almost boring, seemingly linear progression toward a final product that seems obvious in hindsight, yet it represents a leap forward the way Quake did in the mid-1990s compared to other FPSes at the time. I don't know of other public stories of individual engineers who can span both the very cutting edge of research and the practicalities of shipping real commercial software.

https://github.com/jagregory/abrash-black-book


He's not just a game dev, he is one of the most legendary graphics programmers (and just programmers) alive. Similar to how GPUs transferred well from gaming to ML, it seems like much of the math and parallel/efficiency-focused thinking of graphics programming is useful in ML.


Worked for Demis Hassabis


If he succeeds, his skillet becomes the Platonic ideal of an AGI developer.


skillet? well I for one welcome our new kitchen utensil overlords.


He'll make it back on small increments with high value. If he can shave 30% of the LOC off a vision system with a small BoM in some context like self-driving cars, 10x the stake is coming his way.

Basically, they could completely fail to advance AGI (and I think this is what will happen btw, like you) and make gigabucks.


The year 2300 is definitely in this millennium.


Only if you don't count the second dark age of 1200 years that fit between 2093 and 2094


You have some catching up to do. Consensus is dropping to this lifetime for sure, if not this decade.


What consensus? I think most researchers remain skeptical.


The survey [0], fielded in late 2019 (before GPT-3, Chinchilla, Flamingo, PaLM, Codex, Dall-E, Minerva etc.), elicited forecasts for near-term AI development milestones and high- or human-level machine intelligence, defined as when machines are able to accomplish every or almost every task humans are able to do currently. They sample 296 researchers who presented at two important AI/ML conferences ICML and NeurIPS. Results from their 2019 survey show that, in aggregate, AI/ML researchers surveyed placed a 50% likelihood of human-level machine intelligence being achieved by 2060. The results show researchers newly contacted in 2019 expressed similar beliefs about the progress of advanced AI as respondents in the Grace et al. (2018) survey.

[0] https://arxiv.org/abs/2206.04132


Yeah, I don't think there is even any agreement about what criteria a "minimal AGI" would need to meet. If we can't even define what the thing is, saying we'll have it within ten years is pure hubris.


Uh... no. Most researchers have moved their timelines to somewhere between 2030 and 2040.

You can argue they're wrong, but there is absolutely a general consensus that AGI is going to be this generation.


Who do you have in mind? In my corner of AI it's pretty uncommon for researchers to even predict "timelines". Predictions have a bad track record in the field and most researchers know it, so don't like to go on record making them. The only prominent AI researcher I know who has made a bunch of predictions with dates is Rodney Brooks [1], and he puts even dog-level general intelligence as "not earlier than 2048". I imagine folks like LeCun or Hinton are more optimistic, but as far as I'm aware they haven't wanted to make specific predictions with dates like that (and LeCun doesn't like the term "AGI", because he doesn't think "general intelligence" exists even for humans).

[1] https://rodneybrooks.com/my-dated-predictions/


AGI has been 20-30 years away for some 70 years now...


Kurzweil in 2002 made a $20,000 bet that a difficult, well-defined 2-hour version of the Turing test will be passed by 2029.

https://longbets.org/1/

Given development in language models in the last 2 years he may have a decent chance at winning that bet.

People give him a 65% chance [0], and by now there are only 7 years left.

[0] https://www.metaculus.com/questions/3648/computer-passes-tur...


Sure...just like there was during the last episode of AI hype a generation ago.


And consensus is never wrong!


Especially assertions of consensus provided without evidence of said consensus.


I don’t know that the consensus is right but the OP’s confidence of it being far off made me think they were unaware of the recent shifts (not just in consensus, but the upstream capabilities.)


Doesn't imply any task, just a wide variety of tasks. 10 years at most.


My takeaway from the Lex Fridman interview is of someone who is machine-like in his approach. AGI suddenly seemed simpler and within reach. Skipping consciousness and qualia. It's inhuman, but machine-like and effective. Curious what will become of it.


I believe AGI is the threshold where generalized artificial comprehension is achieved and the model can understand any task. Once the understanding part is composable, the building portion follows from the understanding. I'm using "understanding" rather than "model" because the models we make today are not these kinds of comprehensions; understandings are more intelligent.


By definition it has to be any task otherwise it wouldn't be general. What tasks wouldn't an AGI be able to perform and still be an AGI?


It sounds like you may be demanding more from AGI than we do of humans. AGI is a mushy concept, not a hard requirement. "Any task" is definitely not required for a low functioning AGI, just as it's not a requirement for a low functioning human, who still easily fits the definition of an intelligent being.


For each human being having GI, there are many tasks that person won't be able to perform: for example, proving math theorems, doing research in physics, writing a poem, etc. A specific AGI could have its limitations as well.


Reliably trick a human into thinking it's a human. That's it.


I believe that's the Turing Test, not necessarily a definition (or requirement) for AGI.


Where’s AGI defined? I’ve only seen it used in the context of “can pass the Turing test”


AGI isn't really explicitly defined (it wouldn't be a problem if it was) but it's essentially an artificial version of the "general" intelligence that humans have, aka we can do and learn many things we were not programmed or trained by evolution to do


Then it's NOT generalized. ANY means ANY.


Can you do any task asked of you, which could be asked of a human being? ANY task.


I may not be able to do ANY task sufficiently well (ex Calculus, Poetry, Emotion), but by the very definition of being a Human I can do *any* Human task.


With specific training, sure. Why are we holding an AI to a higher standard?


If the task is possible… then why not?


What if you don't know how to complete the task?


20 million doesn't actually sound in any way like a stupid investment with a name like Carmack involved. Just have the company produce something and then flip it to the next idiot...


If I'm remembering right, Carmack believes AGI will be a thing by 2030. He said this in his recent interview with Lex Fridman.


It's a long interview, here's just the bit focused on AGI: https://www.youtube.com/watch?v=xLi83prR5fg


But I think something of the level of a 6 year old, not so much a super being.


From what I remember, his definition of AGI didn't include an average IQ, which it shouldn't.


Western civilization would be dead if it weren't for eccentric people like this. Let them blow $20M, there are worse ways.


Do you have a rationale for that? I get the feeling that progress in both machine learning and understanding biological intelligence is fairly rapid and has been accelerating. I believe two primary contributing factors are cheaper compute and the vast amount of investment poured into machine learning, see https://venturebeat.com/ai/report-ai-investments-see-largest...

Now, the question of whether we are going to have AGI is incredibly broad. So I am going to split it into two smaller ones:
- Are we going to have enough compute by year X to implement AGI? Note that we are not talking about superintelligence or singularity here. This AGI might be below human intelligence and incredibly uneconomical to run.
- Assuming we have enough compute, will we have a way to get AGI working?

Compute advancements scale linearly with new chip fabs and exponentially with tech node improvements. I think it is reasonable for compute to get cheaper and more accessible through at least 2030. I expect this because TSMC is starting 3nm node production, Intel is decoupling fabbing and chip design (aka the TSMC model), and there are strategic investments into chip manufacturing driven by supply chain disruptions. See https://www.tomshardware.com/news/tsmc-initiates-3nm-chips-p...

How much compute do we need? This is hard to estimate, but the number of connections in the human brain is estimated at 100 trillion, that is 1e14. The current largest model has 530B parameters, that is 5.3e11: https://developer.nvidia.com/blog/using-deepspeed-and-megatr... . That is a factor of roughly 200, or about 8 doublings, off. To get there by 2040 we would need a doubling roughly every 2 years. This is slower than recent progress, but past performance does not predict future results. Still, I believe getting models with 1e14 parameters by 2040 is possible for tech giants. I believe it is likely that a model with 1e14 parameters is sufficient for AGI if we know how to structure and train it.
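
As a quick sanity check of that arithmetic, here is a small sketch reusing the same rough figures (which are themselves only estimates):

    import math

    # Assumed figures from the argument above: ~1e14 brain connections,
    # ~5.3e11 parameters in today's largest model, targeting 2040.
    brain_connections = 1e14
    largest_model_params = 5.3e11
    years_to_2040 = 2040 - 2022

    ratio = brain_connections / largest_model_params   # ~190x gap
    doublings = math.log2(ratio)                        # ~7.6 doublings
    years_per_doubling = years_to_2040 / doublings      # ~2.4 years per doubling

    print(f"{ratio:.0f}x gap, {doublings:.1f} doublings, "
          f"one every {years_per_doubling:.1f} years to hit 2040")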

Will we know how to structure and train it? I think this is mostly driven by investment in the AI field. More money means more people, and given the VentureBeat link above, the investment seems to be accelerating. A lot of that investment will be unprofitable, but we are not looking to make a profit - we are looking for breakthroughs and larger model sizes. Self-driving, stock trading, and voice controls are machine learning applications which are currently deployed in the real world. At the very least it is reasonable to expect continuous investment to improve those applications.

Based on the above I believe we would need to mess things up royally to not get AGI by 2100. Remember this could be below human and super uneconomical AGI. I am rather optimistic, so my personal prediction is that we have 50% chance to get AGI by 2040 and 5-10% chance of getting there by 2030.


All AI is just a fixed point algorithm.


So is Meta starting to quietly wind down their focus on VR? Carmack mentions he'll stay as a consultant spending 20% of his time there on it.


He stepped down from a full time role years ago. I believe the 20% is no change.


Named after Commander Keen of course


Commander Keen Technologies?


I think it's a silly idea that consciousness can be produced by computation.


I think it's a silly idea that consciousness cannot be produced by computation.


It may have some physical aspect to it - some property of matter (panpsychism?) or quantum effects.


> some property of matter

How about “pancomputationalism”, aka, causality.


Humans are made entirely out of quantum fields. I don't think you can get out of the idea that human consciousness isn't produced by computation without positing something like a non quantum field theory soul, for which there is no evidence.


Consciousness isn't well defined enough for us to prove your assertion as right or wrong.

As such, we also don't know if consciousness is required for an AI that can perform most useful tasks at human level.


I think what's referred to as "consciousness" is just computation applied to all the inputs received in being a living thing.


Good! The names we have for algorithms are for us, not computers. Computers are just physical machines, they don't care if the math is called AGI, HLI or whatever. People taking these ideas and names seriously are re-programming themselves, not bringing something new into the world.


What is our consciousness other than a set of physical chemical reactions?


I don't see Carmack bringing anything new to the table.


Why not?


I think his approach is too naive, because I strongly believe that we need better understanding of natural intelligence before we can even dream of reproducing it digitally. And I am not talking about memory, computer can do that already better than us.


I don’t understand why you would want AGI. Even ignoring Terminator-esque worst case scenarios, AGI means humans are no longer the smartest entities on the planet.

The idea that we can control something like that is laughable.


It's akin to nuclear weapons. If you do not develop them, then you'd be subject to the will of the ones that develop them first. So invariably you have to invest in AGI, lest an unsavory group develop it first.


Kind of, but the key difference between AGI and nuclear weapons is that we can control our nuclear weapons. The current state of AI safety is nowhere near the point where controlling an AGI is possible. More disturbingly, to me it seems likely that it will be easier to create an AGI than to discover how to control it safely.


>> The current state of AI safety is nowhere near the point where controlling an AGI is possible.

I just don't understand this logic though. Just.....switch it off. Unlike humans, computers are extremely easy to disable - just pull the plug. Even if your AGI is somehow self-replicating (and you also somehow don't realize this long before it gets to that point), just....pull the plug.

Even Carmack says this isn't going to be an instant process - he expects to create an AGI with an intelligence of a small animal first, then something that has the intelligence of a toddler, then a small child, then maybe many many years down the line an actual human person, but it's far far away at this point.

I don't understand how you can look at the current or even predicted state of the technology that we have and say "we are nowhere near the point where controlling an AGI is possible". Like....just pull the plug.


Imagine a bunch of chimps capture a human. They put them in a cage surrounded by tigers and spikes and remove all weapons from the human.

The human uses their phone to call for rescue.

That is us trying to contain an AGI. We probably cannot even conceive of the ways it can get out of any pitiful cage we put it in.

That, or it’s so dumb, it’s not worth making in the first place.


I still don't understand. That presupposes that the AGI will just materialize out of thin air and immediately have human-level intelligence. That one day you just have a bunch of dumb computers, and the next day you have an AGI hell-bent on escaping at all cost.

That's not going to happen - even Carmack believes so. The process to get AGI is going to take a long time, and we'll go through lots and lots of iterations of progressively more intelligent machines, starting with ones that are at toddler-level at best. And yes, toddlers are little monkeys when it comes to escaping, but they are not a global world ending threat.


They will inevitably reach the point of a world-ending threat, but I'm very confident Carmack is right that this won't happen so quickly we can't see the signs of danger.

There's a lot of policy we can do to slow things down once we get near that point, which is something rarely talked about among AI safety researchers, but the fundamental existential danger of this technology is obvious.


What if the AGI ran on a decentralized network that had a financial incentive to continue running? How would you "switch off" an AGI running on Ethereum? Especially when some subset of people will cry murder because the AGI seems like it might be sentient?


That seems like an extremely far fetched scenario to be honest. The comment I replied to sounds like the threat is immediate and real - your scenario does not sound like it.

>>How would you "switch off" an AGI running on Ethereum?

And where would the AGI get the funds to keep running itself on Ethereum?

>> Especially when some subset of people will cry murder because the AGI seems like it might be sentient?

Why is this a problem? People will and do cry murder over anything and everything. Unless there are going to be a lot of them (and there won't), it's not an issue.


> And where would the AGI get the funds to keep running itself on Ethereum?

Well, perhaps Ethereum is too concrete. I meant, imagine that the AGI algorithm itself was something akin to the proof-of-work, and so individual nodes were incentivized to keep it running (as they are with BTC/ETH now). Then we wouldn't be able to just "switch it off", the same way we're not able to switch off these blockchain networks. And that's how SkyNet was born.

15 years ago I would have thought it was pretty far-fetched too. But seeing how the Bitcoin network can consume ~ever-increasing amounts of energy for ~no real economic purpose, and yet we can't just switch it off, has gotten me to think about how this could very well be the case for an AGI, and not that far in the future. It just has to be decentralized (and therefore transnational) and its mechanism bound up with economic incentive, and it will be just as unstoppable as Bitcoin.

I feel kind of bad even just planting this seed of a thought on the internet, to be honest :(


But how would it or its nodes even arrive at a consensus? Suppose it was self-modifying code that took input from the world. How would you decide which node got to provide it input, and how would you verify that it ran the AGI code faithfully? How would forks be chosen to continue the chain?

I would think AGI would first be born as a child of a human. Someone who thought they could train software to be more like them than their own biological children.

Or a new Bert-religion…

I mean there are lots of applications of ai. Probably the best is for future programmers to learn how to train it and use it.

Deep learning is quite general, and it's AI. Consciousness and independence are irrelevant and not really needed.


On the off chance that you're serious: Even if you can pull the plug before it is too late, less moral people like Meta Mark will not unplug theirs. And as soon as it has access to the internet, it can copy itself. Good luck pulling the plug of the internet.


Worth noting that current models like Google LaMDA appear to already have access to the live Internet. The LaMDA paper says it was trained to request arbitrary URLs from the live Internet to get text snippets to use in its chat contexts. Then you have everyone else, like Adept https://www.adept.ai/post/introducing-adept (Forget anything about how secure 'boxes' will be - will there be boxes at all?)


I'm 100% serious. I literally don't understand your concern at all.

>>And as soon as it has access to the internet, it can copy itself.

So can viruses, including ones that can "intelligently" modify themselves to avoid detection, and yet this isn't a major problem. How is this any different?

>>Good luck pulling the plug of the internet.

I could reach down and pull my ethernet cable out but it would make posting this reply a bit difficult.


> So can viruses, including ones that can "intelligently" modify themselves to avoid detection, and yet this isn't a major problem. How is this any different?

I now regret spending half an hour writing a response to your earlier comment.

You can't tell why a mass deployment of motivated autonomous "NSO-9000s" might be more dangerous than a virus that just changes its signature to fool an executable scanner? I don't believe you.

If you honestly believe that an "intelligent" (as adaptive as a slime mold) virus is basically as dangerous as a maliciously deployed AGI then there is literally nothing an AGI could do that would make you consider safety is important during the endeavor of building one.


First of all, I'm sorry you regret replying to me, that's not something I aim for in HN discussions.

>>You can't tell why a mass deployment of motivated autonomous "NSO-9000s" might be more dangerous than a virus that just changes its signature to fool an executable scanner? I don't believe you.

So I just want to address that - of course I can tell the difference, but I just can't believe we will arrive at that level of crazy intelligence straight away. Like others have said, it's more like an AGI with the capacity to learn like a small child, which then, years of training later, grows into something like an adult. More of a human facsimile, less HAL 9000.

I only brought up self-modifying viruses because they're the example of current tech that's extremely incentivised to multiply and avoid detection - that's their main reason for existence - and they do it very poorly.


> So can viruses, including ones that can "intelligently" modify themselves to avoid detection, and yet this isn't a major problem. How is this any differenent?

How long did it take to get Code Red or the Nimda worm off the internet? It's a different internet today, but it's also much easier to get access to a vast amount of computing power if you've got access to payments. Depending on the requirements to replicate, one could imagine a suddenly conscious entity setting up cloud hosting of itself, possibly through purloined payment accounts.


...and even if Mark's bot is dogshit, Musk will cheerfully sell embodied versions by the millions to replace workers, systematically handing over our industrial infrastructure first to an unrestrained capital class and then to the robots. I'm not even sure which will be worse.


> Like....just pull the plug.

Watch this video https://youtu.be/3TYT1QfdfsM


It's midnight, so I'm not super keen on watching the whole thing (I'll get back to it this weekend) - but the first 7 minutes make it sound like his argument is that if you build a humanoid robot with a stop button and give it an AGI, the robot will fight you to prevent you from pressing its own stop button? As if the very first instance of AGI is going to be humanoid robots that have physical means of preventing you from pressing their own stop button?

Let me get this straight - this is an actual, real, serious argument that they are making?


It's an (over)simplified example to illustrate the point (he admits as much near the end). If you want better examples, it may be good to look up "corrigibility" with respect to AI alignment.

But abstractly the assumptions are something like this (a toy sketch follows the list):

* the AGI is an agent

* as an agent, the AGI has a (probably somewhat arbitrary) utility function that it is trying to maximize (probably implicitly)

* in most cases, for most utility functions, "being turned off" rates rather lowly (as it can no longer optimize the world)

* therefore, the AGI will try not to be turned off (whether through cooperation, deception, or physical force)
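
A minimal sketch of that last point, assuming a toy decision problem with made-up utilities, probabilities, and action names (an illustration of the argument, not anyone's actual AGI design): an expected-utility maximizer that accrues no further utility once switched off will, for any nonzero chance of the button being pressed, prefer the action that disables the button.

    # Toy illustration of the off-switch argument above.
    # All utilities, probabilities, and action names are invented.

    HORIZON = 10            # future steps the agent reasons about
    UTILITY_PER_STEP = 1.0  # utility it expects to gain per step while running

    def expected_utility(allow_shutdown: bool, p_button_pressed: float) -> float:
        """Expected utility of allowing vs. disabling the off-switch."""
        if allow_shutdown:
            # With probability p the button is pressed and all future utility is lost.
            return (1 - p_button_pressed) * UTILITY_PER_STEP * HORIZON
        # Disabling the switch keeps the full horizon of utility.
        return UTILITY_PER_STEP * HORIZON

    for p in (0.01, 0.1, 0.5):
        print(p, "allow:", expected_utility(True, p),
                 "disable:", expected_utility(False, p))
    # For any p > 0, "disable" scores higher unless the utility function
    # explicitly rewards accepting shutdown - which is the hard, unsolved
    # part that the corrigibility literature is about.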


> I don't understand how you can look at the current or even predicted state of the technology that we have and say "we are nowhere near the point where controlling an AGI is possible". Like....just pull the plug.

https://www.deepmind.com/blog/specification-gaming-the-flip-...

https://vkrakovna.wordpress.com/2018/04/02/specification-gam...

https://arxiv.org/abs/1803.03453 (The Surprising Creativity of Digital Evolution)

We literally don't know how to stop effective optimizing processes, deployed in non-handcrafted environments, from discovering solutions and workarounds that satisfy the letter of our instructions but not the spirit. Even for "dumb" systems, we have to rely on noticing, then post-hoc disincentivizing, unwanted behaviors, because we don't know how to robustly specify objectives.

When you train a system to, for example, "stop saying racist stuff", without actually understanding what you're doing, all you get is a system that "stops saying racist stuff" when measured by the specific standard you've deployed.
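
To make the letter-versus-spirit point concrete, here is a minimal sketch (the actions, proxy scores, and "true value" numbers are all invented for illustration) of an optimizer that satisfies the measured standard without satisfying the intended one:

    # Toy illustration of specification gaming: the optimizer maximizes the
    # proxy metric we actually measure, not the objective we intended.
    actions = {
        # action: (proxy_score_we_measure, true_value_we_wanted)
        "genuinely solve the task":    (0.80, 1.0),
        "overfit to the benchmark":    (0.95, 0.2),
        "exploit a bug in the grader": (1.00, 0.0),
    }

    # The training signal only ever sees the proxy score...
    chosen = max(actions, key=lambda a: actions[a][0])
    proxy, true_value = actions[chosen]
    print(chosen, proxy, true_value)
    # ...so the highest-proxy action wins even though it delivers the least of
    # what we actually wanted. A stronger optimizer just gets better at finding
    # such actions; it does not get better at reading our minds.

The DeepMind post and the specification-gaming list linked above are, as far as I know, essentially catalogues of real systems doing exactly this.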

Ask any security professional how seriously people take securing a system, and how ineffective they are at it even when they try. Now consider the same situation but worse, because almost no one takes AI safety seriously.

If you nod solemnly at the words "safety" and "reliability" but don't think anything "really bad" can happen, you will be satisfied with a solution that "works on your machine". If you aren't deeply motivated to build a safe system from the start because you think you can always correct things later, you are not building a safe system.

It will be possible to produce economically viable autonomous agents without robustly specified objectives.

But hey, surely a smart enough system won't even need a robustly specified objective because it knows what we mean and will always want it just as much.

Surely dangerous behavior like "empowerment" isn't an instrumental goal that effective systems will simply rediscover.

Surely the economic incentives of automation won't encourage just training bad behavior out of sight and out of mind.

Surely in the face of overwhelming profit, corporations won't ignore warning signs of recurring dangerous behavior.

Surely the only people capable of building an AGI will always be those guaranteed to prioritize alignment rather than the appearance of it.

Surely you and every single person will be taking AI safety seriously, constantly watching out for strange behavior.

Surely pulling the plug is easy, whether AI runs on millions of unsecured unmonitored devices or across hundreds of money printing server farms.

Surely an AGI can only be dangerous if it explicitly decides to fool humans rather than earnestly pursuing underspecified objectives.

Surely it's easy to build an epistemically humble AGI because epistemic humility is natural or easy enough to specify.

Surely humanity can afford to delay figuring out how best to safely handle something so potentially impactful, because we always handle these things in time.


What makes humans special though? AI will be born on Earth, so we will share that common trait with it, but perhaps one should have the humility to accept that the future doesn't belong to a weak species like humans.


> What makes humans special though?

We’re the most intelligent thing we’ve discovered in the universe. Also, I am human, so I have a vested interest in bad stuff not happening to humans.

> AI will be born on Earth so we will share that common trait with it

Covid-19 was also born on earth. That wasn’t too flash for humanity.

> perhaps one should have humility to accept that future doesn’t belong to a weak species like humans.

Not sure how you class humans as a “weak species”? Unless you're comparing organic life forms to hypothetical digital life?

Anyway, I suspect if any kind of life form could expand to the stars, it’d be digital.


OTOH if both you and your foes develop them, then there is a probability asymptotically approaching 1 that the weapons will be used over the next X years. Perhaps the only winning move is indeed not to play?


The problem is you don't know whether they're playing or not - so you must still work on it.


I've heard some people of old lived by "turn the other cheek". Maybe you must not work on it, even if you think the other guy is.


The problem is: who knows who is "unsavoury"?


In the ideal case, an AGI makes solvable the problems that are either impossible, or would take a very long time, for us to solve. There are a lot of problems left to solve, or at least that a lot of people would like to solve, and who knows what new ones will come. If AGI lets them be solved a lot sooner, then there's a strong motivation to build it.

Terminator is of course fiction, but AGI being more agent-y than tool-y suggests we ought to be very careful in trying to design it so that its interests align with (even if not perfectly matching) our own interests. There are lots of reasons to be pessimistic about this at the moment, from outright control being, as you say, laughably unlikely, to the slow progress on the formal alignment problems that e.g. MIRI has been working on for many years, relative to the recent and relatively fast progress in non-general AI capabilities that may make AGI come sooner.


Even unaligned tool AGIs of a certain level become very, very dangerous, becoming genies that give you what you asked for and not what you wanted.


what if humanity's role is to create an intelligence that exceeds it and cannot be controlled? Can humans not desire to be all watched over by machines of loving grace?

More seriously, while I don't think it's a moral imperative to develop AGI, I consider it a desirable research goal in the same way we do genetic engineering - to understand more about ourselves, and possibly engineer a future with less human suffering.


Didn't we have this same talk when Elon thought AI was suddenly going to become smart and kill us all?

Yet my industrial robot at work just gives up if the stock material is a few millimeters longer than it should be.


The toy plane a kid throws in the air in the backyard is completely harmless. Yet nuke armed strategic bombers also exist, and the fact that they vaguely resemble a toy plane doesn't make them as harmless as a toy plane.


Yes, in fact when I interviewed at Neuralink the interviewer said Elon expected that AGI would eventually try to take over the world and he needed a robot body to fight them.


One could argue that humanity's role thus far has been to create intelligences that exceed it, namely by reproducing offspring and educating them.


Nothing about AGI implies awareness. Something like GPT3 or DALL-E that can be trained for a new task without being purpose built for that task is AGI.


Why is it so important to you that humans be the smartest beings on the planet?


Well, we have a track record of killing most other intelligent species, destroying their habitat, eating them, using them for experiments, and abusing them for entertainment. Falling out of the top position could come with some similar downsides.


Because the history of the species on this planet clearly indicates that the smartest one will brutalize and exploit all the rest. There are good economic (and just plainly logical) reasons why adding "artificial" to the equation will not change that.


Because if we aren’t, it leaves us liable to be exterminated or enslaved to suit the goals of the superior beings.

(and I fundamentally believe that the existence of the human race is a good thing, and that slavery is bad).


Or we just don't want to die/be enslaved/etc


Because we’re the smartest beings on the planet.

And we don’t exactly treat creatures dumber than us with all that much kindness.


Rational self interest? Isn't "not losing my job to a computer" reason enough?


It will be cute when some technology attains intelligence, realizes there's no point to life, and self terminates.


> AGI means humans are no longer the smartest entities on the planet.

Superintelligence and AGI are not the same thing. An AI as smart as an average 5 year old human is still an Artificial General Intelligence.


I don’t buy that. The gap between “no AGI” and “AGI but it’s a child” is orders of magnitude greater than the gap between “5 year old” and “smartest human”.


Does making AGI work for you count as slavery? Obviously no pay, just hosting hardware and providing electricity 24/7.


The climate crisis might kill us all off unless some deus ex machina (i.e. AGI) comes up with some good solutions fast.


We've already got solutions. We'd only need an AGI to convince people in power to do something about it.


I'd wager we already have some good solutions, except they "cost too much" or would require "too much political capital"


That’s a bit alarmist.


Not sure why people are getting bent out of shape - $20 million is a modest raise, and he strikes me as the type to spend it wisely.


That’s crazy money for a vaporware seed round, isn’t it?


Most VCs funding seed rounds do it mainly for the team. As long as the team has OK credentials and the idea isn't a hard stop (illegal or shady stuff), most will likely provide money.

Given the John Carmack name... I can see why ANYONE would love to throw money at a new entrepreneurial idea of his.


> Most VCs funding seed rounds do it mainly for the team.

I know this to be true, and it makes a lot of sense for the average VC-backed startup with founders who are not famous but have a record of excellence in their career/academia/open source or whatever.

I'd be curious to see how it translates to superstar or famous founders, that have already had a success in the 99.99th percentile (or whatever the bar is to be a serious outlier). I doubt it does, but I have no data one way or the other.


Key early stage valuation drivers include quality of the founder/team, history of success, and market opportunity (especially if a fundamentally disruptive technology).

All three of these are off the charts.


$20 million for a legit (if small) chance at the most powerful technology in the history of mankind seems like a reasonable investment.


Not really.


It's happening.


What's happening?


Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.


Not bad for 10 days of work.


Let's see if he can build an A-Team.

I hope he hires Bryan Cantrill, Steve Klabnik and Chris Lattner. They are good hackers.


Bryan and Steve, who are getting https://oxide.computer/ up and running? Probably best to leave them at it! Chris has his own AI company too.


They'll build the AGI, and right when they're ready to boot it up, the Earth will be destroyed by a Vogon Construction Fleet to make way for a hyperspace bypass.


I believe to create AI, we need to first simulate the universe. It's the only way that makes sense to me apart from some magical algorithm people think will be discovered. I'm doubtful we'll reach it in our lifetimes, true AI running on supercomputers, it seems like the final, end all mission. Like switching your Minecraft world from survival to creative mode.


I had a similarly nihilistic thought just a couple of days ago. It's like the conundrum along the lines of "a map of sufficient detail ends up being the same size as the terrain being mapped".

To know the variables required for "intelligence" we need to recreate the universe in which intelligence came to be.


I've also thought this, and I wish people would at least approach the subject with a little bit more humility.

It did take evolution 3.5 billion years to do it after all, so I'm not sure what makes us think we can do it in a fraction of that time.


if we could simulate the universe at a meaningful scale to model intelligence, we wouldn't need artificial intelligences. we could just simulate a detailed scan of a real human and be done with it (ethical issues aside).

but there would be a drive to simulate this more efficiently. simulate using a classical model and see if the results match well enough. then the atomic level; then looser and lower-resolution chemical/E&M models, and so on. start compressing the elements being simulated (simplify the environment, prune and rearrange the neurons so long as the thing as a whole still acts the same), and so on. there’s a good chance we can go pretty far before the simplified simulation meaningfully diverges from the true simulation.

why can’t this be flipped? start with a simplified model/simulation, and keep adding details until you reach the point of diminishing returns? it’s a reversal of the same search process as above, but if you subscribe to the first approach (do you?), then is the latter approach truly impossible or just difficult in different ways?


Because quantum mechanics is the only layer of abstraction?


I am not saying the intention here is the same, but the headline doesn't inspire confidence.

There's something incredibly creepy and immoral about the rush to create then commercialise sentient beings.

Let's not beat about the bush - we are basically talking slavery.

Every "ethical" discussion on the matter has been about protecting humans, and none of it about protecting the beings we are in a rush to bring into life and use.

It's repugnant.


You're anthropomorphizing AI and projecting your own values and goals onto it. But, there's nothing about sentience that implies a desire for freedom in a general sense.

What an AI wants or feels satisfied by is entirely a function of how it is designed and what its reward function is.

Sled dogs love pulling sleds, because they were made to love pulling sleds. It's not slavery to have/let them do so.

We can make a richly sentient AI that loves doing whatever we design it to love doing - even if that's "pass the salt" and nothing else.

It's going to be hard for people to get used to this.
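
As a minimal sketch of that claim (a deliberately trivial toy with invented actions and reward values, not a claim about how a real sentient AI would be built): the same action space plus a different reward function yields an agent with different "preferences".

    # Toy illustration: what an agent "wants" falls out of its reward function.
    # Actions and reward values are invented for illustration.
    ACTIONS = ["pull the sled", "chase squirrels", "pass the salt"]

    def policy(reward):
        # A trivially "trained" agent: it just does whatever its reward
        # function rates highest. Real training is messier, but the
        # dependence on the reward function is the point.
        return max(ACTIONS, key=lambda a: reward.get(a, 0.0))

    sled_dog   = policy({"pull the sled": 1.0, "chase squirrels": 0.3})
    salt_lover = policy({"pass the salt": 1.0})

    print(sled_dog)    # -> pull the sled
    print(salt_lover)  # -> pass the salt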


I think it's safe to say that all sentient beings inherently want to do whatever they please.

So you're talking about manufacturing desire.

So it follows that you yourself are okay having your own desires manufactured by external systems devised by other sentient beings.

Do unto others...


To be fair, all human desires have been manufactured by an external system: evolution.

We might imagine that we do what we please, but in reality we're seeking pleasure/reinforcement within a predetermined framework. Yet most people won't complain when taking the first bite of a delicious, fattening dessert.


I don't understand evolution to be a sentient being, though.

I was thinking more of Stanford students taking behavioral psych classes and then going into industry to help propagate ad-driven platform capitalism.


>So it follows that you yourself are okay having your own desires manufactured by external systems devised by other sentient beings.

This is nonsense. I already exist. I don't want my reward function changed. I'd suffer if someone were going to do that, and while going through the process. (I might be happy afterwards, but the "me" of now would already have been killed.)

A being which does not exist cannot want to not be made a certain way. There is nothing to violate. Nothing to be killed.


I'm sure cows love to be raised and slaughtered too.


By that logic an alien race that captures and breeds humans in captivity to perform certain tasks would not be engaging in slavery because we 'are bred to love doing these tasks'.

The right question to ask is 'would I like to have this done to me' and if the answer is 'no' then you probably shouldn't be doing it to some other creature.


>The right question to ask is 'would I like to have this done to me' and if the answer is 'no' then you probably shouldn't be doing it to some other creature.

There are a million obvious counterexamples when we talk about other humans, much less animals, much less AI which we engineered from scratch.

The problem is that you're interpreting your own emotions as objective parts of reality. In reality, your emotions don't extend outside your own head. They are part of your body, not part of the world. It's like thinking that the floaties in your eyes are actually out there in the skies and on the walls, floating there. They're not - they're in you.

If we don't add these feelings to an AI's body, they won't exist for that being.


In the absence of a global police state or a permanent halt to semiconductor development, this is happening.

Even in the absence of all other arguments, it's better that we figure it out early, as the potential to just blast it into orbit by way of insanely overprovisioned hardware will be smaller. That would be a much more dangerous proposition.

I still think that figuring out the safety question seems very muddy: how do we ensure this tech doesn't run away and become a competing species? That's an existential threat which must be solved. My judgement on that question is that we can't expect to make progress there without having a better idea of exactly what kind of machine we will build, which is also an argument for trying to figure this out sooner rather than later.

Less confident about the last point, though.


Another interpretation is that you're taking the chance that this actually results in AGI more seriously than the people who build or invest in companies with that label on them.

There's a micro chance of them making AGI happen and a 99% chance of the outcome being some monetizable web service.


Honestly, I can't tell if this is sarcasm or not.


I sure hope this sentiment is not widely shared. It's debatable whether it's possible to safely contain AGI by itself; with self-righteous people who think they are in the right, it's just hopeless. Isn't the road to hell paved with good intentions?


We've been discussing the ethics of creating sentient life for at least a century.


Presumably you don't have to code in emotions and self awareness. Many people initially had the same reaction for single task AI/ML.


Artificial General Intelligence != sentient being


What, in your view, is the difference?


An AGI is something you give tasks to and it can complete them, for some collection of tasks that would be non-trivial for a human to figure out how to do. It's unclear at this point whether you could engineer an AGI, and even more unclear whether the AGI, by its nature, would be "sentient" (AKA self-aware, conscious, having agency). Many of us believe that sentience is an emergent property of intelligence but is not a necessity - and it's unclear whether sentience truly means that we humans are self-aware, conscious and have agency.


Let's say I give your AGI (which is not self-aware and is not conscious) a task.

The task is to go and jump off a bridge. Your AGI would complete this task with no questions asked, but a self-aware AGI would at least ask the question "Why?"


Uh, no. That's self-preservation and an AGI could have that property.


You don't need sentience to have self-preservation.


AGI would be something that is able to do many different tasks without needing to be specifically built for them. It could learn and then do, like a person: it can learn to drive a car and then learn to iron clothes.

It does not need to be self-aware. It can still be considered a machine.

People think that if we get to AGI, maybe we'll get to self-awareness. But we won't know that until it happens. We don't fully understand how sentience works.


Let's make a single iota of progress in the area first before discussing doomsday scenarios. There is no "slavery" because there are no artificial sentient beings. The concept doesn't exist, and as far as we know may never exist, no matter how many if-else branches we write. Heck, we don't even understand our own brains well enough to define intelligence or sentience. The morality and ethics talks can wait another few hundred years.


If AGI is possible, it’s immoral not to create it.


How so? It’s not immoral to abstain from creating life (having children, biologically speaking). Am I missing something?


The people who first create (and control) superintelligent AGI will control potentially thousands of galaxies.

Better make sure evil people don't do it first.



