Hacker News

This is encouraging. If you're going to work on artificial general intelligence, a reasonable context in which to work on it is game NPCs. They have to operate in a world, interact with others, survive, and accomplish goals. Simulator technology is now good enough that you can do quite realistic worlds. Imagine The Sims, with a lot more internal smarts and real physics, as a base for work.

Robotics has the same issues, but you spend all your time fussing with the mechanical machinery. Carmack is a game developer; he can easily connect whatever he's doing to some kind of game engine.

(Back in the 1990s, I was headed in that direction, got stuck because physics engines were no good, made some progress on physics engines, and sold off that technology. Never got back to the AI part. I'd been headed in a direction we now think is a dead end, anyway. I was trying to use adaptive model-based control as a form of machine learning. You observe a black box's inputs and outputs and try to predict the black box. The internal model has delays, multipliers, integrators, and such. All of these have tuning parameters. You try to guess at the internal model, tune it, see what it gets wrong, try some permutations of the model, keep the winners, dump the losers, repeat. It turns out that the road to machine learning is a huge number of dumb nodes, not a small number of complicated ones. Oh well.)
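
The guess-tune-permute loop described there is easy to sketch. Here's a toy version (the black box, the model parameterization, and the mutation scheme are all invented for illustration; a real model-based identification system would use structured models with delays, multipliers, and integrators rather than a flat parameter vector):

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical "black box": internally y = 2.5*x + 1.0, but the learner
# only ever sees input/output pairs, never the coefficients.
def black_box(x):
    return 2.5 * x + 1.0

def score(params, samples):
    # Lower is better: squared prediction error of the candidate model.
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in samples)

def perturb(params, scale=0.1):
    # "Try some permutations of the model."
    return tuple(p + random.gauss(0, scale) for p in params)

samples = [(x, black_box(x)) for x in range(-5, 6)]
population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for generation in range(200):
    # Keep the winners, dump the losers, repeat.
    population.sort(key=lambda p: score(p, samples))
    winners = population[:5]
    population = winners + [perturb(w) for w in winners for _ in range(3)]

best = min(population, key=lambda p: score(p, samples))
```

As the comment says, this style of search over a small number of complicated nodes is now considered a dead end next to gradient descent over huge numbers of dumb nodes, but the loop itself is the idea in miniature.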

Hi John! Animats was very cool! As you know, game physics still kinda sucks for this work. Unity/Bullet/MuJoCo are the best we have, and even they have limited body-collision counts. Luckily we've now got some GPU physics acceleration, but IMO it's not enough.

What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects. My guess is that Google/Stadia or Unity/UnityML are better places to do that work than Facebook, but if Carmack decides to learn physics engines* and make a dent I'm sure he will.

Until our environments are rich and diverse our agents will remain limited.

*More accurately, I'm sure his knowledge already exceeds most people's.

> What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects.

Improbable tried to do that with Spatial OS. They spent $500 million on it.[1] Read the linked article. No big game company uses it, because they cut a deal with Google so their system has to run on Google's servers. It costs too much there, and Google can turn off your air supply any time they want to, so there's a huge business risk.

[1] https://improbable.io/blog/the-future-of-the-game-engine

Agreed; as a game engine this might power some high-end Stadia games with crazy physics, but the real value is in high-complexity environments for virtual agents.

Relatedly, companies like SideFX are also doing really interesting work in distributed simulation (e.g. Houdini).

I did my MSc thesis in AI back then, writing a dedicated simulator for a specific robot used in autonomous-systems research. You find that, especially when trying to faithfully reproduce sensor signals, you need to dive deep into not just the physics of e.g. infrared light, but also the specific electronic operation of the sensor itself.

But that kind of realism is not needed for all AGI research.

I also spent some years using evolutionary algorithms to evolve control networks for simple robots. The computational resources available at the time were rather limited, though. It should be more promising these days, now that your commodity gaming PC can spew out in 30 minutes what back then took all the lab's networked machines running each night for a few weeks.

Modeling everything realistically is super hard; any interaction with the real world is full of the weirdest unexpected electrical and mechanical issues. Anyone who hasn't tried it first-hand can't imagine half the ways things will almost certainly go wrong on the first try :) ... but as you've said, for developing AGI as a concept, simplified worlds should work just fine.

Indeed, I don't think humans themselves model the world realistically. I think they model the world closely enough, but are able to adapt their prediction process for situations where their predictions don't work and/or their knowledge is inadequate. To model such a "satisficing" process, you don't need exact simulation.

True. Simulations are always a simplification of reality and leave a lot out of the picture.

On the flip side, successful robotics concepts might have more chance of being relevant to AGI.

> This is encouraging. If you're going to work on artificial general intelligence, a reasonable context in which to work on it is game NPCs.

I don't think so. Game NPCs don't need AI, which would be way overkill; they just need to provide the illusion of agency. I think for general AI you need a field where any other option would be suboptimal or inadequate, but in videogames general AI is the suboptimal option... it's more cost-effective to just fake it!

> Game NPCs don't need AI

> ... more cost effective is to just fake it!

Many players complain in story-heavy games that their choices have no consequences for the story; this is largely because building stories with meaningful branches isn't economically feasible.

A game that could make NPCs react to what the player does dynamically, while also creating a cohesive story for the player to experience, would be absolutely groundbreaking in my opinion.

This is more in the realm of AI story generation, but I haven't seen any work on this that generates stories you would ever mistake as coming from a human (please correct me if I'm wrong), so it would be amazing to see some progress here.

You're talking about different problems.

Story AI is basically having a writer sit down and write a branching story tree, with authored writing the whole way. At best it's a manually coded directed acyclic graph.
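
In other words, something like this (node names and text are made up for illustration; a real game would hang dialogue, art, and world state off each node, but the structure is still a hand-authored DAG):

```python
# Toy illustration: a hand-authored branching story is just a directed
# acyclic graph, with every branch written out by a human in advance.
story = {
    "start":    ("You wake in a cell.",         {"pick lock": "hall", "shout": "guard"}),
    "guard":    ("A guard drags you out.",      {"fight": "hall", "comply": "ending_b"}),
    "hall":     ("The hallway is empty.",       {"run": "ending_a"}),
    "ending_a": ("You escape. THE END.",        {}),
    "ending_b": ("Back to the cell. THE END.",  {}),
}

def play(choices):
    """Follow a list of player choices from the start node; return visited nodes."""
    node, path = "start", ["start"]
    for choice in choices:
        _, branches = story[node]
        node = branches[choice]
        path.append(node)
    return path
```

The economics problem is visible even at this scale: every node is paid-for writing, and most players only ever see one path through it.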

Tactical AI, i.e. having the bad-guy soldiers move about the battlefield and shoot back at you in a realistic manner, is 100% about faking it. It's better to despawn non-visible, badly placed enemies and spawn well-placed non-visible ones than to have some super-smart AI relocate the badly placed enemies to better positions. It's better to have simple mechanisms that lead to hard-to-understand behavior than complex mechanisms that lead to merely instinctive-looking behavior.

There was an amazing presentation at GDC maybe three years ago that perfectly articulated this. The game was something about rockets chasing each other. I wish I could find the link.

If you can find the link, please post it. I'm interested!

> Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.

That's not entirely true - it's just that no games studios are willing to compromise on graphics and art for something silly like the ability to impact the game world.

I'm not sure if you're being sarcastic about the "something silly" part, but do you have any examples of any games (indie, commercial or academic) that let you meaningfully impact the game world?

I think they don't exist because it's an exceptionally difficult problem, even for games with lo-fi graphics or text only. I've found it hard to find any AI projects that generate stories or plots that are remotely compelling.

Big studio game companies push "your choices matter" as a selling point as well, but few deliver.

> any examples of any games [...] that let you meaningfully impact the game world?

Dwarf Fortress

Fallout: New Vegas, a bunch of Telltale games, Myst/Riven, dozens of JRPGs (Chrono Trigger/Cross come to mind immediately) with branching endings and characters who survive or die based on player actions. Yeah, games are made all the time where the player has a meaningful impact on the game world.

Minecraft is successful precisely because you can meaningfully impact the game world.

Minecraft doesn't have an overarching story or complex NPCs though.

Agreed about the meaningful choices and dynamically generated reactions from NPC, but general AI is not needed for this in my opinion.

You also have to consider whether the complaints of "many" players matter when publishing a game. A percentage of vocal players will complain no matter what. Yes, they will complain even if you somehow implement true AI!

> Agreed about the meaningful choices and dynamically generated reactions from NPC, but general AI is not needed for this in my opinion.

Maybe, but it would be an impressive demonstration of AI, and very different from what has been shown for Go, Chess and StarCraft.

I think a compelling AI-written short story, for example, would be leagues ahead of what is required to write a convincing chatbot: you need an overarching plot, subplots, multiple characters interacting in the world, tracking of each character's beliefs and knowledge, and tracking of what the reader must be thinking and feeling.

It would likely rely a lot on understanding real-world and cultural knowledge, though; Go and StarCraft are much cleaner in comparison.

> A percentage of vocal players will complain no matter what.

Yep, but I can't think of a single game with a plot that meaningfully adapts to how the player plays. Either there are many endings but the path to each is short, or all the choices converge quickly back onto the same path.

Again, please correct me if I'm wrong, but I've looked quite hard for examples of innovation in the above recently and haven't found much. You can find papers on e.g. automated story generation or game-quest generation on Google Scholar from the last 10 years, but the examples I found weren't that compelling.

AI-generated "true" fiction seems like sci-fi to me.

Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age when Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...

... what counts as "true" or "good" fiction is up for debate. In fact, it's a debate that can never be settled, because there is no right answer beyond how it feels to you, your friends and the authors you respect.

But that said, I seriously doubt it would fool me, and I think it won't be within reach of an AI any time soon, or ever, not without creating an artificial human being from scratch. And maybe not even then, because how many real people can write compelling fiction anyway? :)

> Of course an hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...

It feels like you should be able to procedurally generate at least something story-like by combining common story arcs, templates, character archetypes etc. without too much effort, but I've yet to find any compelling examples of this anywhere. When you look into the problem more, you realise it's a lot harder than it seems.
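
The naive template-combination version is a ten-minute exercise, and its output shows exactly why it falls flat: every line is grammatical, and nothing connects to anything (archetypes, arcs, and settings below are invented for illustration):

```python
import random

random.seed(1)  # deterministic for the example

# Naive template + archetype story generator. Each sentence is fine in
# isolation; there is no plot, causality, or memory across sentences,
# which is roughly the problem being described above.
ARCHETYPES = ["orphan", "mentor", "trickster"]
ARCS = ["seeks revenge", "loses everything", "returns home changed"]
SETTINGS = ["in a dying city", "at the edge of the map", "during the long winter"]

def generate(n=3):
    return [
        f"A {random.choice(ARCHETYPES)} {random.choice(ARCS)} {random.choice(SETTINGS)}."
        for _ in range(n)
    ]
```

Getting from this to anything with an overarching plot, consistent characters, and tracked beliefs is where the actual difficulty lives.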

We've seen lots of examples of chatbots that are said to pass the Turing Test but really aren't that intelligent at all, so a "Turing Test of fiction writing", as you put it, sounds like a super interesting next step to me.

If his true purpose were to improve on videogames' NPCs, you're 100% right that working on "real" AGI would be overkill. But in this case, someone w/ deep background in videogaming intends to use that context as a means to the end of AGI R&D -- a possibly subtle, but crucial distinction.

If you have general AI of enough sophistication to use in a game, haven’t you just created a virtual human?

Yes, probably. At which point, the "videogame" part is irrelevant.

> more cost effective is to just fake it

I struggle to see the distinction. Isn't the Turing test defined as 'faking humans (or human intelligence) convincingly enough'?

There is a saying: the benefit of being smart is that you can pretend to be stupid. The opposite is more difficult.

"Fake it" as in cutting corners and performing sleights of hand. Instead of moving enemy soldiers strategically, just spawn them close but out of sight, because the player won't know better. This doesn't help if you truly want to devise a military AI; it's only useful for games. And that's just one example.

I think the Turing Test is no longer thought of as an adequate metric for general AI (if it ever was to begin with).

I don't think parent was referring to game NPCs as a reasonable application of AGI, but rather as a reasonable domain conducive to the development of AGI.

Yes, I understand this and was replying to that interpretation. I think it's not a particularly conducive domain because the incentive is just to fake it (because that's enough for games). A better domain would be one where faking it just won't cut it.

Yeah, I believe in this game / simulated-world NPC idea too. To get the kind of complexity we want, we either need sensors in the real world or interfacing in a virtual world that humans bring complexity to (probably both; the humans are part of the sensing technology to start). Things like AlphaZero got good because they had a simulatable model of the world (just a chess board plus a next-state function, in their case). We need increasingly complex and interesting forms of that.

In some sense you can think of interfacing w/ the online world + trying to win attention to yourself as the kind of general game that is being played.

I've long taken the position that intelligence is mostly about getting through the next 10-30 seconds of life without screwing up. Not falling down, not running into stuff, not getting hurt, not breaking things, making some progress on the current task. Common sense. Most of animal brains, and a large fraction of the human brain, is devoted to managing that. On top of that is some kind of coarse planner giving goals to the lower level systems.

This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.

"Common sense" can be thought of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.

Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog".[1] I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".

Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.

[1] http://people.csail.mit.edu/brooks/papers/CMAA-group.pdf

I remember this TED talk many years ago where the speaker proposes that intelligence is maximizing the future options available to you:



The video linked by the direct parent to my comment is a prank video.

Prank video? Satire at most. Did you even watch it?

The genius of Cog was that it provided an accepted common framework for building a grounded, embodied AI. Rod was the first I saw to have a literal roadmap on the wall of PhD theses laid out around a common research platform, Cog, in this branch of AI.

In a sense, the journey was the reward rather than the very unlikely short term outcome back then.

I was thinking about the manipulation issue tonight. I'd been throwing a tennis ball in the pool with my kids and I realised how instinctual my ability to catch was. A ball leaves my kids hands and I move my hand to a position, fingers just wide enough for a ball, and catch it. All of it happens in a fraction of a second.

The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the fingertips it makes the smallest adjustments, again in tiny fractions of a second.

I don't know if I'd call it modelling the physics of a ball in flight, exactly. It seems more like the brain has evolved a pathway for predicting how ballistic projectiles, affected only by gravity and momentum, will move, and it automatically applies that to things.

What makes me think of it like that is hearing about how the brain is actually really bad at predicting the path of things that don't act like that. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system that compensates for how it actually travels in order to hit a target with the thing.

You mean the brain has evolved over millennia to model the physics of the world and specialize in catching and throwing things.

Absolutely. It also requires more than the evolutionary adaptations to do it. The skill requires the catching individual to have practiced the specific motions enough times previously to become proficient to the point it becomes second nature.

Compare what happens during a practice game of catch between six year old, first time Little Leaguers vs. MLB starters.

Dogs can do this too, and quite a bit more impressively than most humans.

It’s always impressive to watch how good my dog is at anticipating the position of the ball way ahead of time.

If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall he will run to where the angle will put the ball after it bounces. If I throw it high in the air he knows where to run almost immediately (again using my body language to know where I might be trying to throw it.). He’s very hard to fool, too, and will learn quickly to not commit to a particular direction too quickly if it looks like I’m faking a throw.

I always feel like he’d make a great soccer goalie if he had a human body.

That's kind of the thesis Rodolfo Llinas puts forward in his book I of the Vortex[0], although more about consciousness than intelligence. That is, consciousness is the machinery that developed in order for us to predict the next short while and control our body through it.

[0] https://mitpress.mit.edu/books/i-vortex

> On top of that is some kind of coarse planner giving goals to the lower level systems.

There are counterexamples, such as AlphaGo which is all about planning and deep thinking. It also combines learning with evolution (genetic selection).

True, but AlphaGo is specialized on a very specific task where planning and deep thinking is a basic requirement for high level play.

We don't need to think 10 "turns" ahead when trying to walk through a door; we just try to push or pull on it. And if the door is locked, or another person is coming from the opposite side, we'll handle that situation when we come to it.

That's not true; human beings plan ahead at doors more than at many other things. Should I try to open this bathroom door, or will that make it awkward if it's locked and I have to explain that to my coworker afterwards? Should I keep this door open for a while so the guy behind me gets through as well? Not to mention that people typically route-plan at doorways.

Doors are basically planning triggers more than many things.

Horses don't plan though, and they are much better than computers at a lot of tasks. If we can make a computer as smart as a horse, then we can likely also make it as smart as a human by bolting some planning logic on top of that.

“Horses don’t plan though[...]”

Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.

Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:

1. Equestrian jumping events; horses often balk before a hurdle

2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.

The context was this quote:

> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.

Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”


> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

This is probably a variant of Andrew Ng's claim that ML can solve anything a human could solve in one second, given enough training data.

But intelligence actually has a different role. It's not for those repeating situations that we could solve by mere reflex; it's for those rare situations where we have no cached response and need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both are necessary tools for making decisions, but they are optimised for different situations.
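
One toy way to picture that split (everything here is invented for illustration, and real model-based RL learns its transition model from experience rather than being handed one): the reflex is a cheap cached lookup, and thinking is a search through a model of the world, used only when no cached response exists.

```python
from collections import deque

def model(state, action):
    # Known transition model of a tiny 1-D world: move left/right on 0..4.
    return max(0, min(4, state + (1 if action == "right" else -1)))

# "Reflex": cached responses for familiar states (model-free flavor).
reflex_policy = {0: "right", 1: "right"}

def plan(state, goal):
    # "Thinking": breadth-first search through the transition model
    # (model-based flavor), returning a sequence of actions.
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        s, actions = frontier.popleft()
        if s == goal:
            return actions
        for a in ("left", "right"):
            nxt = model(s, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))

def act(state, goal=4):
    # Use the cheap reflex when we have one; fall back to planning when not.
    if state in reflex_policy:
        return [reflex_policy[state]]
    return plan(state, goal)
```

The point of the sketch is just the dispatch in `act`: cached responses for repeating situations, deliberate search for the rare ones.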

In my experience they learn to open gates. They certainly aren't trained to do this, but learn from watching people or each other.

They will also open a gate to let another horse out of their stall which I would count as some form of planning.

Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.

> They can manage to be surprised by the same things every single day.

Sounds like most human beings, given an unpleasant stimulus, for example a spider.

Thank you for the context and new resources to learn from.

It took us millions or billions of years of evolution, plus a couple of years of training in real life, to be able to walk through a door. It's not a simple task even for humans: it requires maintaining a dynamic equilibrium, which is basically solving a differential equation just to keep from falling.

Board games have been solved. Now the big boys are working on StarCraft and Dota 2, and it takes a shitload of money to pay for the compute and simulation necessary to train them. Not something you can do on the cheap.

DeepMind's StarCraft AIs are already competing at the Grandmaster level[0], which is the highest tier of competitive play and represents the top 0.6% of competitors.

I am pleasantly surprised by how quickly they have been tackling big new decision spaces.


The next arena is multi-task learning. Sure, I lose to specialized intelligences in each separate game, but I can beat the computer at basically every other game, including the game of coming up with new fun games.

Perhaps the first sentient program will be born in an MMORPG?

Just imagine all the exploits they'll find and abuse.

OpenAI has already done some experiments here [0]. All the way down at the bottom, under the "surprising behaviors" heading, 3 of the 4 examples involve the AIs finding bugs in the simulation and using it to their advantage. The 4th isn't a bug exactly, but a (missing) edge case in their behavior not initially anticipated.

[0] https://openai.com/blog/emergent-tool-use/

There's an entire Anime genre about that..


Ah, I knew anime would be useful someday

Read "Three Laws Lethal", by Walton.

Puzzle game with a great story. I recommend it to the HN people.

I loved it, but my issue with that game was severe motion sickness after 20-30 minutes... never finished it :(

Thanks for the warning, I cannot even play Minecraft. I wish Carmack had tackled motion sickness in VR/Games before switching to AI; he did talk about it in the interviews as being a limitation though.

There's a need gap[1] to solve Simulation Sickness in VR and First Person games.

[1]: https://needgap.com/problems/7-simulation-sickness-in-vr-and...

I was under the impression that simulation sickness was largely solved outside of extreme cases (like a vr portal game). I thought we're just waiting for hardware to catch up.

Years ago John said that if you have 20k and a dedicated room, you can make a convincing VR experience that won't make anyone sick.

I loved and finished the game but I had the same issue on two occasions. I felt sick and I had to stop. Now I know it was not the food I had just eaten but the game itself, thank you!

Who would have thought that a philosophical puzzle game could come from the creators of Serious Sam.

Yes, but it sounds weird to me because Carmack has spent his whole life involved with games, yet has not been known for an interest in game AI before.

Game AI has nothing to do with AGI (or even regular AI) beyond the surface-level description OP provided. The reason game AI hasn't progressed in the last few decades isn't that technology is holding us back; after all, we can already achieve impressive machine-learning feats using current-gen GPUs. It's that bad NPC AI is by design, so players can learn to overcome it and succeed. Very few people want to play a game that always beats them. Most games use simple state machines or behaviour trees with predictable outcomes for their NPCs, because it would be a waste of effort to do anything more, and it would actually hurt the game by making it less fun and burning engineering time on things the player won't benefit from.

Modern big-budget games increasingly don't use behavior trees and state machines for their AI anymore. That approach has been superseded by technologies like GOAP [1] or HTN [2]. These are computationally very expensive, especially within the constrained computation budget of a real-time game.

While it's true that game AI is often held back by game design decisions, it's not true that technology isn't holding us back in this area as well.

[1] https://www.youtube.com/watch?v=gm7K68663rA (GDC Talk: Goal-Oriented Action Planning: Ten Years of AI Programming)

[2] https://en.wikipedia.org/wiki/Hierarchical_task_network
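
For a flavor of the GOAP idea, here's a minimal planner sketch (the action names, costs, and flat fact representation are invented for the example; production implementations run A* with heuristics over much richer world state): given actions with preconditions and effects, search for the cheapest action sequence that satisfies a goal.

```python
import heapq
from itertools import count

ACTIONS = {
    # name: (cost, preconditions, facts added) -- all invented for illustration
    "get_axe":   (1, frozenset(),             frozenset({"has_axe"})),
    "chop_tree": (3, frozenset({"has_axe"}),  frozenset({"has_wood"})),
    "make_fire": (2, frozenset({"has_wood"}), frozenset({"warm"})),
}

def goap_plan(state, goal):
    """Uniform-cost search from `state` (a frozenset of facts) to any state
    containing every fact in `goal`; returns a list of action names."""
    tie = count()  # tie-breaker so heapq never has to compare frozensets
    frontier = [(0, next(tie), state, [])]
    best_cost = {state: 0}
    while frontier:
        cost, _, facts, plan = heapq.heappop(frontier)
        if goal <= facts:  # all goal facts satisfied
            return plan
        for name, (c, pre, add) in ACTIONS.items():
            if pre <= facts:  # action is applicable
                nxt = facts | add
                if cost + c < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = cost + c
                    heapq.heappush(frontier, (cost + c, next(tie), nxt, plan + [name]))
    return None  # goal unreachable with these actions
```

The appeal over a hand-built behavior tree is that designers author actions and goals, and the sequence is found at runtime; the expense is that this search runs inside a frame budget.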

You don't optimize for competitive performance (it is trivial to design a game AI that beats every player every time, given that you control the tilt of the playing field). You use the AI for bounded response variation (all NPCs act 'natural' and differently from one another) and engaging procedural generation (here is a chapter of a story; now draft an entire zone with landscape, NPCs, cities, quest storylines, etc.).

Games like PvE MMOs need to find a way to produce engaging content faster than it can be consumed, at a price point that is economically viable. The way they do it now is by having players repeat the same content over and over again, with a diminishing-returns, variable-reward behavioral reinforcement system.

One of the design goals of game AIs is also that they be fun to play against. If they are too smart and coordinated, they throw you off in a way that feels "unfair" to the player.

You have to hit a spot where they are sometimes a bit surprising, but not in a way that cannot be reacted to quickly on your feet. This throws realism out of the window.

But why would good game AI have to make the characters better than the player? The focus on NPC AI should be to make them interesting, not necessarily really tough opponents.

You're assuming game AI means an agent that directly competes with the player.

Plenty of games have NPCs with scripted routines, dialog, triggers, etc that could be improved either by reducing the dev cost to generate them without reducing quality or reacting to player behavior more naturally.

Except in those cases it's even more important that the NPCs don't do anything unexpected. Those NPCs are like actors in a stage play; you don't want them coming up with their own lines and confusing the audience.

Don't forget there is a certain randomness to 'more natural', and with randomness you're going to invite Murphy to the party.

Not all NPCs have to be part of a script. They can just be additional characters that add life and realism to the simulated world.

A weapons maker with a unique backstory and realistic conversations that reference it is more interesting than a bot, and opens up the possibility of unscripted side-quests.

In many cases, maybe. Personally I would love to play a game with a world inhabited by "individual" NPC AIs, where they can influence the world as much as I can, with no specific act structure or story arc.

A significant part of gaming is risk-free experimentation in a simulated world. The experiments possible are bounded by the simulation quality of the world, so more realistic NPC behavior would open up a lot more games.

There is an older game called STALKER which had (limited) elements of what you describe: autonomous NPCs that influence the game world. Even though it was limited (the NPCs just battled for control of certain territories), I always thought it was a really neat mechanic. It made the world feel more 'real' and alive.

You would see these factions fighting and gaining or losing territory throughout the game. You could choose to help them or just pass on by, but the conflict progressed regardless of your choice.

It would be fun if they could ad-lib.

I'd give anything for a "moral"/"nice-guy" AGI that could replace my Dota 2 team mates and opponents.

If the "game" is survival and selection for attention (to get compute space, so literal survival) from humans, "interestingness" is what will matter, and I think what people will end up finding most interesting is NPCs that feel like other identities they can empathize with and interact with: work with to build things, spend time in a community with, fall in love with, and so on. This really is about virtual-world construction more than simple competitive games. I think it may not end up looking like any particular sense of "AGI" we can currently imagine (I really think we can only properly imagine it once it exists, and it seems not to yet), but it will probably be "distributed" enough that the interfacing may not feel like anything at any one particular site.

The game may even be played by saying things on Twitter and becoming interesting enough that people DM you and try to build a relationship with you, while you're a bot.

> it's because bad NPC AI is by design, so players can learn to overcome them and succeed.

That's part of it, but there are other factors too. The more complex the AI, the harder (i.e. more expensive) the game is to tune and test. Game producers and designers are naturally very uncomfortable shipping a game whose behavior they can't reasonably predict.

This is a big part of why gamers always talk about loving procedural generation in games but so few games actually do it. When the software can produce a combinatorial number of play experiences, it's really hard to ensure that most of the ones players will encounter are fun.

Half. The other half was spent building rockets (Armadillo Aerospace) and VR tech, which arguably is more interesting in its AR, industrial, or transportation applications.

I love the idea of using the Sims as a platform, as it's a place where it will be blatantly obvious that 'effective' AI without built-in ethics is repulsively inhuman.

As a side note, if we're living in a simulation [0], I'd really like to know who's "real" vs. who's an AI bot out there...

[0] https://en.wikipedia.org/wiki/Simulation_hypothesis

Hate to break it to you, but we're all NPCs

It does, however, have a huge bias towards human-like AI. Maybe it's not smart to narrow down to copying us so quickly.

I mean: maybe it's more efficient to have it read all of wikipedia really well before adding all the other noisy senses.

Simulator technology is now good enough that you can do quite realistic worlds.

It is nowhere near good enough to avoid running into Moravec’s Paradox like a brick wall as soon as you try and apply it outside the simulator.

I don't think that approach is going to work. For any clearly bounded and delineated task, such as a game, the most optimal, lowest energy and lowest cost solution is not AGI but a custom tuned specialist solver. This is why I don't think Deep Blue or Alphago are paths towards AGI. They are just very highly advanced single-task solvers.

Now AlphaGo and its implementation framework are much more sophisticated than Deep Blue. It's actually a framework for making single-task solvers, but that's all. The fact that it can make more than one single-task solver doesn't make it general in the sense we mean in the term AGI. AlphaGo didn't learn the rules of Go. It has no idea what those rules are; it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences. It's like an image classifier that can identify an apple, but has no idea what an apple is, or even what things are.

To build an AGI we need a way to genuinely model and manipulate objects, concepts and decisions. What's happened in the last few decades is we've skipped past all that hard work, to land on quick solutions to specific problems. That's achieved impressive, valuable results but I don't think it's a path to AGI. We need to go back to the hard problems of “computer models of the fundamental mechanisms of thought.”[0]


> AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences

There are indeed some people who learn chess by "reading the manual". Or learn a language by memorizing grammar rules. Or learn how to build a business by studying MBA business theories.

There are also tons of other people who do the opposite. They learn by simply doing and observing. I personally have no idea what an "adverb" is, but people seem perfectly happy with the way I write and communicate my thoughts. Would my English skills count as general intelligence, or am I just a pattern-recognition automaton? I won't dispute the pattern-recognition part, but I somehow don't feel like an automaton.

I can certainly see the potential upsides of learning some theory and reasoning from first principles. But that seems too high a bar for general intelligence. I would argue that the vast majority of human decisions and actions are made on the basis of pattern recognition, not reasoning from first principles.

One last note: "working out their consequences" sounds exactly like a lookahead decision tree
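For what it's worth, "working out their consequences" as a lookahead tree fits in a few lines. This is a toy game invented purely for illustration (players alternate adding 1, 2, or 3 to a running total; whoever makes it exactly 10 wins), nothing AlphaGo actually does:

```python
# Toy lookahead: recursively "work out the consequences" of each move.
# Game (made up for illustration): players alternate adding 1, 2, or 3
# to a running total; whoever makes the total exactly 10 wins.

def winning_move(total):
    """Return a move that forces a win for the player to move, or None."""
    for m in (1, 2, 3):
        if total + m == 10:
            return m  # immediate win
        if total + m < 10 and winning_move(total + m) is None:
            return m  # every opponent reply from here loses
    return None

print(winning_move(0))  # 2: the forcing first move
print(winning_move(6))  # None: whoever moves from 6 loses to perfect play
```

The point being: exhaustive consequence-tracing is mechanical, and whether doing it counts as "understanding" the game is exactly the question under debate.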

AlphaGo and its kind are doing some things that we do, for sure. We do utilise pattern recognition, and some of the neurological tools we bring to bear on these problems might look a bit like AlphaGo.

The thing is those are parts of our neurology that have little to do with general intelligence. I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery. In that sense high level Go and Chess players turn themselves into single-task solvers. They're better at bringing that experience and capability to bear in other domains, because they have general intelligence with which to do so, but those specialised capabilities aren't what make them a being with general intelligence. Or if specialising systems are important to general intelligence, it's as just a part of a much broader and more sophisticated set of neurological systems.

I can't agree with you. I know many chess players at master and Grand Master level. Look at Bobby Fischer too. Human specialization does not carry over very well to other tasks, only marginally...

I don't think that's a disagreement, I think you're right. Most of the benefit someone like that would get from their competence in Chess or Go would be incidental ones. In fact I would say your experience confirms my understanding of this, optimizing for a single domain in the way Alphago does or even in the way humans do, has little to do with general intelligence.

Ah, I misread. So we agree then. On the other hand, I would not be surprised if the hippocampus was highly developed in chess players like Bobby Fischer, which could translate into better spatial reasoning. Perhaps general intelligence is best trained by variance, not targeted training.

You could be right, and for the record I upvoted your comment for the contribution regarding your experience with high-level chess players. I think the downvotes you're getting are regrettable.

I appreciate your sentiment. This field is my focus right now. My bachelor's is in BioChemMed, but I am doing a master's in CS and have finished many courses, including the free ones by Hinton, LeCun, and Bengio.

Here is my strongest prediction:

AGI is only possible if the AGI is allowed to cause changes to its inputs.

Current ML needs to be steered towards attention mechanisms and more Boltzmann nets / finite and infinite impulse response nets.

> "... if AGI is allowed to cause changes to its inputs."

Could you elaborate on this point ?

Do you mean that the AGI could change the source of inputs, or change the actual content of those inputs (e.g. filtering) or both?

And why do you think this is a critical piece ?

Both. Attention changes the source. Action interacts with the source, modifying it. But the environment will need to respond back. This is reminiscent of reinforcement learning, but is more like a traditional NN where the input is dynamic and evolving with every batch, not only in response to the agent but in response to differential equations or cellular automata / some type of environment evolution. AGI should be able to change the environment it inhabits. Attention in some respects is a start: it is essentially equivalent to telling reality to move the page and watching it happen. Until we have attention AND data modification, we will keep getting the specialized NNs we are used to.
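A rough sketch of that closed loop (every name here is invented for illustration): the agent's action perturbs the environment, and the environment also evolves on its own between observations:

```python
import random

# Illustrative sketch (all names invented): the agent's action modifies
# the environment, and the environment also evolves on its own between
# observations, a bit like a crude cellular automaton.

class Environment:
    def __init__(self, size=8):
        self.cells = [random.random() for _ in range(size)]

    def step(self, action):
        # The agent's action perturbs one cell: acting changes the input.
        idx = action % len(self.cells)
        self.cells[idx] = min(1.0, self.cells[idx] + 0.5)
        # The environment evolves by itself: simple neighbour smoothing.
        n = len(self.cells)
        self.cells = [
            0.9 * self.cells[i] + 0.05 * (self.cells[i - 1] + self.cells[(i + 1) % n])
            for i in range(n)
        ]
        return self.cells  # the next observation reflects the agent's action

class Agent:
    def act(self, observation):
        # Trivial "attention": act on the weakest cell currently observed.
        return min(range(len(observation)), key=lambda i: observation[i])

env, agent = Environment(), Agent()
obs = env.cells
for _ in range(5):
    obs = env.step(agent.act(obs))
```

The key structural difference from a fixed training set is that the observation stream depends on what the agent did, so the "dataset" never exists independently of the agent.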

But in this classification, isn't "general intelligence" just a meta-single-problem-solver, solving the problem of which single purpose solver to bring to bear on this task?

I think I think, but might I just be using a single problem solver that gives the appearance of thinking?

I suspect the way we think in terms of clear symbols and inference isn't actually how we think but a means of providing a post-hoc narrative to ourselves in a linguistic form.

Edit: Which kind of explains the failure of good-old fashioned symbolic AI as it was modelling the wrong thing.

That makes a lot of sense. An internal narrator on events explaining them to the passenger, rather than the driver of said events.

Definitely not my idea though - couldn't find any good references to where I read about that idea.

[NB I worked in good-old-fashioned AI for a number of years]

> AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them.

When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve it. That's the definition of learning and intelligence that can generally be applied to any problem.

> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.

What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it, applying "cognitive tools", then it's capable of intelligence.

>When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve it. That's the definition of learning and intelligence that can generally be applied to any problem.

It can't be applied to any problem though. Take the example I gave elsewhere of a game where you provide the rules, and as the game progresses the rules change. There are real games that work like this, generally card games where the cards contain the rules, so as more cards come into play the rules change. AlphaZero cannot play such games, because there isn't even a way to provide it with the rules.
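To make that concrete, here's a minimal, hypothetical sketch of such a game, where a card can install a new scoring rule mid-game. The structure is trivial to write down, but there's no fixed rule set you could hand a solver up front:

```python
# Hypothetical sketch: "the rules live on the cards". Each card carries a
# value and, optionally, a new rule that changes how later plays score.

class Game:
    def __init__(self):
        self.rules = [lambda value: value]  # base rule: face value scores itself
        self.score = 0

    def play(self, card):
        value, new_rule = card
        if new_rule is not None:
            self.rules.append(new_rule)  # the game's rules just changed
        for rule in self.rules:
            value = rule(value)
        self.score += value

game = Game()
game.play((3, None))             # scores 3 under the base rule
game.play((2, lambda v: v * 2))  # installs "double everything", scores 4
game.play((3, None))             # the same card value now scores 6
```

A human can absorb the doubling rule the moment the card is read out; a solver trained against a fixed game definition has nowhere to put it.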

>> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.

> What are you on about? Cognition and intelligence are the same thing if it's capable of cognition or as you put it applying "cognitive tools" then it's capable of intelligence.

I'm saying that human minds apply many cognitive tools, and that Alphago is like one of those tools. It's not like the part choosing and deploying those tools, which is the really interesting and smart part of the system.

The human brain consists of a whole plethora of different cognitive mechanisms. Cognition is a broad term of a huge variety of mechanisms, none of which by themselves constitute all of intelligence. A lot of people look at Alphago and say aha, that's intelligence because it does something we do. Yes, but it only does a tiny, specialist fragment of what we do, and not even one of the most interesting parts.

In a Platonic dialogue, they discuss the definition of knowledge as "true belief with an account". You have true beliefs about language, but you don't know it in the Platonic sense if you can't explain it to someone else. Another way I've heard this defined is, you don't know it if you couldn't write the algorithm for it.

By that definition, most people don't have "knowledge" over most things which they believe and act on. And yet, no one accuses them of not possessing "general intelligence".

If an AI shows the same capabilities as the average human being, I would say that is AGI by definition. Regardless of whether it meets the requirement for Platonic Knowledge.

Really, most people learn by a mix of the two. It's going to take you a lot longer to learn chess if you have no clue about the rules.

But on the other hand, if you get into rote memorization before you start the game, it's going to slow you down because you have no context.

I think people do both. You learn the rules in order to play a game or perform a task, but with practice you end up training a task specific system that "knows" how to do it without thinking about the rules and perhaps without knowing them.

I think if you have a framework that can produce arbitrary single-task solvers (which AlphaZero can't yet), you would have something indistinguishable from AGI, since communication between single-task solvers is also kinda just a single-task solver.

It's certainly not the most efficient way to use our current hardware, and it's not clear to me how big some of these neural nets would have to be, but if we had computers with a trillion times the memory capacity and speed, IMO it'd certainly work on some level.

How would a single-task solver, or hierarchy of them, go about constructing a conceptual model of a new problem domain? The problem with a solver is it only really goes in a single direction, but when modeling a system you spend a huge amount of time backtracking and eliminating elements that yielded progress at first but then proved to be obstacles to progress later. You also need to be able to rapidly adapt to changing requirements.

Imagine playing a game of Chess in which the pieces and rules gradually changed bit by bit until by the end of the game you were playing Go. That's much closer to what real-life problems are like, and a human child could absolutely do that. They might not be much good at it, but they could absolutely do it, even without ever having played either game before, just learning as they went. Note to AGI researchers: if your chatbot can't cope with that or a problem like it without any forewarning, don't bother applying for a Turing Test with me on the other side of the teletype.

They'd do it like we do: by comparing the new situation to previous ones we know about, applying the model that fits best, and then adapting to the results.

For humans, the more previous ones we know about, the better, because we have more chance of applying a model that works in the new environment. That's called "experience".

That’s a very broad, general description of behaviour that doesn’t actually describe an implementation. In fact it could apply to many completely different possible implementations. I suspect though that humans do more than this, that we have a way of either constructing entirely new models from scratch, or of dramatically adapting models to new situations without mere iterative fitting to feedback. Humans are actually capable of reasoning effectively about entirely new ideas, scenarios and problems. We have little to no idea how we do this.

I don't know. I'm not so sure that we can create new working models from scratch. We definitely learn by iterative feedback: babies wiggle stuff and watch what happens to learn how to move their bodies. Learning to ride a bicycle is mostly about falling off bicycles until you learn how not to.

I've seen people apply their normal behaviour to situations that have changed, and then get totally confused (and angry) as to why the result isn't the same. Observe anyone travelling in a new country for examples ("why don't they show the price with the sales tax included here? This is ridiculous!").

In a perfect world, sure, we'd construct a rational mental model of a new situation and test it carefully to ensure it matched reality before trusting it, and then apply it correctly to the new situation. But it's not a perfect world, and people don't actually do that. Usually we charge in and then cope with the results.

Of course, I'm not saying that AI should do that. It'll be interesting to see how a "good" general AI copes with a genuinely new situation.

I think we apply radically different cognitive machinery to physical skills like riding a bicycle, compared to playing a card game where the rules are on the cards, and you have no idea what rule will be on the next card or even what rules are possible. We can train Chimps to ride bicycles, so they have the cognitive machinery for that, but we can't teach them to play these kinds of card games.

Interesting. True. But is that because we lack the communication skills to explain the rules to chimps, or because they lack the cognitive modelling ability to understand those rules?

Seems to me you're just describing reinforcement learning. Youre just saying a human child can adapt to the new problem faster than the AI can adapt, which is true but not the argument you've been making in this thread.

It's not reinforcement learning, because the child can do it the first time, so there's no reinforcement. I have kids, so many times I have played games with them successfully, purely from a description of the rules and playing as we went. They even beat me once the first time we ever played a game, by employing a rule at the end which had never come up in previous play. Compared to that, AlphaGo isn't even in the race, because we can't even tell it the rules.

> since communication between single-task solvers is also kinda just a single-task solver.

It would be nice if it worked like that, but I think you're massively underestimating the problem set here. I'd suggest it's more like the difference between the architectural glue an engineer needs for a command-line util and for a fully fledged Enterprise solution (i.e. orders of magnitude).

Of course because we don't actually know how intelligence exactly works we're both guessing here.

The only way I think it can be done is simulated evolution, be that simulated evolution of neural nets or something else.
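As a minimal sketch of that loop (everything here is illustrative; a real neuroevolution system would evolve network weights and topologies, not a four-number genome):

```python
import random

# Illustrative sketch of simulated evolution: mutate-and-select a small
# "genome" toward a target behaviour. A real neuroevolution run would
# evolve network weights/topologies; this only shows the loop itself.

TARGET = [0.1, 0.9, 0.5, 0.3]  # stand-in for "behaves well in the world"

def fitness(genome):
    # Higher is better: negative squared error against the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(20)]
    population = survivors + offspring                         # elitism + mutation

best = max(population, key=fitness)
```

The hard part isn't this loop, of course; it's building a fitness landscape (i.e. an environment) rich enough that surviving it demands general intelligence.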

As others have mentioned here though.. this becomes horrifying if we've created something sentient to kill in games or enslave.

I've been thinking for a while that use of AI in games might become a civil rights frontier in about 30 to 50 years or so

Open-ended simulations, similar to Earth conditions, might be general enough to sprout some artificial general intelligence. Put multiple intelligences in a massive multiplayer online world and have them compete for shelter and resources. It's an environment that we know has produced intelligence.

It may be a brutal struggle, but perhaps that struggle is important. Perhaps having a simulated tree fall on you is more meaningful than being reaped by some objective function at the end of an epoch.

I think you’re on a potentially productive path, but it took 2 billion years of evolution in a staggeringly vast environment like that to produce results. The question is really how to shortcut that process, but training environments may well have a role to play.

Reverse engineering a human mind would be another approach.

This is often overlooked but it's the only approach that is pretty much guaranteed to succeed given enough time. That said, it's also likely that AGI will come about way earlier from another approach (just as planes came before the "robot birds").

I agree, though this leads to haunting outcomes. E.g. if we create a successful interface, then what's the point of building our own digital pastiches if we can just strap in the real thing?

Check out OpenWorm. They're trying to reverse engineer the simplest organism with a nervous system, a nematode with 302 neurons. They're making progress, but not very fast. That approach is going to be a long haul.

Ted Chiang wrote an interesting novella, The Lifecycle of Software Objects, about that very subject


Another one is Crystal Nights by Greg Egan. Full text here:


Or Iain Banks' take on the subject, in Surface Detail(https://en.wikipedia.org/wiki/Surface_Detail)

edit: wrong book ;)

> simulated evolution

Isn't that what genetic algorithms are?

Yeah, one kind I guess. Or perhaps they cover all kinds?

well, "kill" becomes moot if the code is preserved. Like "killing" another player in a multi-player game. You're not actually killing them.

This may be a clunky analogy, but is this fundamentally different from killing a human as long as we keep a record of their DNA sequence? Maintaining the information doesn't seem enough to negate snuffing out the execution of that information.

The generic AI could be "playing" in a thousand virtual environments at once. Killing one of them doesn't really have a parallel in human life, or ethics.

I mean, yes, you killed a sentient being. But if that sentient being has a thousand concurrent lives, then what does "killing" one of those lives even mean? And if it can respawn another identical life in a millisecond, does it even count as killing?

I suspect that having sentient virtual entities will provide philosophy and ethics majors a lot of deep thinking room. As it already has for SciFi authors.

Would this opinion change if scientists were able to prove the multiverse theory where there are infinite numbers of "us" living concurrent lives as well?

Not if we persist the state of the mind before we turn them off. Can't do that with humans


> They are just very highly advanced single-task solvers.

What if AGI is just a combination of very highly advanced single-task solvers?

I happen to believe that it is an emergent behavior once the complexity gets high enough, so AGI might just be a (large?) collection of AlphaGo solvers connected to different inputs.

> It's like an image classifier that can identify an apple, but has no idea what an apple is

Kind of like humans then.

A child picks up an apple but doesn't know what an apple "is". It doesn't even have the vocabulary to describe it.

As adults we know what an apple is because we understand it as a concept, the ideal "apple", and can manipulate the concept into areas way outside the original concept (say, the phrase "apple of my eye").

The child does know that the apple is a thing though. That it’s a separate object that can be carried around. Computer vision ML systems don’t even know that!

All they know is how to recognize a common pattern on a pixel grid, after seeing a large number of examples, and then draw a box around it.

The fact that a child has a body and can manipulate the world with all 5 senses working in concert should not be underestimated.

A child comes pre-programmed to put things in their mouth. They also have very sophisticated reward functions built-in that identify tasty sugars entering their mouth.

Very quickly (assuming said child doesn't eat something too bad), in the absence of an external oracle, the child learns a very productive mental model of what an apple is.

This type of feedback loop seems eminently translatable to machine learning, assuming we can encode the concept space in a way that allows the model to be trained within a reasonable set of constraints.

Right, but that's actually just a tiny part of the puzzle. The cognitive machinery that knows about edibility, decomposability (how objects can be decomposed into parts and have internal structure), compositional properties (how the parts of an apple contribute to its attributes as a whole), its relationships and interactions with other objects in the environment. All of that cognitive architecture might be a target for your feedback loop, but it isn't a solver and won't work like a solver.

Yes, but you did not manipulate the concept. You did not invent that phrase, but simply learned its meaning. A machine can do that.

Every child reinvents the concept. It wasn't there at birth, and the words and phrases didn't contain it. That's a difficult topic to wrap one's head around, but it is critically important to distinguish between the signified (the concept) and the signifier (the words etc.).

The child develops concepts and is able to create and evaluate inferences, and thus able to understand metaphors etc.

The concept is what most AI approaches lack. Google's image search can identify apples, and cherries, and probably can categorize both as fruits, but it can't infer that this probably contains seeds, is a living being, etc.

Or even that it is a three dimensional object, or what that means.

You're the only person who mentioned metaphors here. My intuition tells me metaphors will be key to developing AGI. Metaphors literally generalize; they predict; they organize and catalogue. Formation, testing, and introspection of metaphors seems to be a way forward.

If you are interested in this direction of research: There is a big body of work regarding human cognitive processes and the role of metaphor. I would suggest "Philosophy in the flesh" by Lakoff and Johnson. A hefty work, but that was one of the publications that fundamentally changed my perspective on the human mind. The concept of embodied reasoning was eye-opening for me.

As I have an academic background in learning theory and developmental psychology, I'm pretty pessimistic about the current AI trend, autonomous driving etc. Most smart people in the field are chasing what are effectively more efficient regression functions for over 60 years now, and I almost never stumble upon approaches that have looked at what we know about actual human learning processes, development of the self etc.

Moravec's paradox[1] IMO should have been an inflection point for AI research. This is the level of problems AI research has to tackle if it ever wants to create AGI.

[1] https://en.wikipedia.org/wiki/Moravec%27s_paradox

Related: Martin Hilpert has an excellent lecture/video on metaphor, as part of his Cognitive Linguistics course. Well worth a watch if this is a topic that interests you.


> Imagine The Sims, with a lot more internal smarts and real physics, as a base for work.

Sounds like the start of a truly horrifying Black Mirror episode


The internet is so great

> Sounds like the start of a truly horrifying Black Mirror episode

That episode already exists.


Or a Philip K Dick novel that's so weird and prescient, nobody's been able to figure out how to make a movie from it.



>The Perky Pat Layouts itself is an interesting concept. Here's Dick, in the early 60's, coming up with the idea for virtual worlds. I mean, Second Life and other virtual worlds are just a mapping of the Perky Pat Layouts onto cyberspace. Today Facebook acts like the PP Layouts, taking people's minds off toil and work and letting them engage others in a shared virtual hallucination -- you're not actually physically with your friends, and they might not even be your friends.

>Dick’s description of the Can-D experience is essentially a description of virtual sex:

>“Her husband -- or his wife or both of them or everyone in the entire hovel -- could show up while he and Fran were in the state of translation. And their two bodies would be seated at proper distance one from the other; no wrong-doing could be observed, however prurient the observers were. Legally this had been ruled on: no co-habitation could be proved, and legal experts among the ruling UN authorities on Mars and the other colonies had tried -- and failed. While translated one could commit incest, murder, anything, and it remained from a juridicial standpoint a mere fantasy, an impotent wish only.”

>Another character says “when we chew Can-D and leave our bodies we die. And by dying we lose the weight of -- ... Sin.”
