Robotics has the same issues, but you spend all your time fussing with the machinery. Carmack is a game developer; he can easily connect whatever he's doing to some kind of game engine.
(Back in the 1990s, I was headed in that direction, got stuck because physics engines were no good, made some progress on physics engines, and sold off that technology. Never got back to the AI part. I'd been headed in a direction we now think is a dead end, anyway. I was trying to use adaptive model-based control as a form of machine learning. You observe a black box's inputs and outputs and try to predict the black box. The internal model has delays, multipliers, integrators, and such. All of these have tuning parameters. You try to guess at the internal model, tune it, see what it gets wrong, try some permutations of the model, keep the winners, dump the losers, repeat. It turns out that the road to machine learning is a huge number of dumb nodes, not a small number of complicated ones. Oh well.)
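For the curious, that loop looks roughly like this in modern terms. This is purely a toy sketch, with a made-up gain-plus-one-pole-filter standing in for the internal model; nothing here is the original system:

    import random

    def model_output(params, u, state):
        # Toy internal model: one gain plus a one-pole low-pass filter,
        # crude stand-ins for the "delays, multipliers, integrators" above.
        gain, alpha = params
        state = alpha * state + (1 - alpha) * u
        return gain * state, state

    def fitness(params, inputs, outputs):
        # Squared error between the black box's outputs and the model's.
        err, state = 0.0, 0.0
        for u, y in zip(inputs, outputs):
            y_hat, state = model_output(params, u, state)
            err += (y - y_hat) ** 2
        return err

    def tune(inputs, outputs, generations=200, pop=20):
        # Guess candidate models, keep the winners, mutate them, repeat.
        population = [(random.uniform(0, 2), random.uniform(0, 1))
                      for _ in range(pop)]
        for _ in range(generations):
            ranked = sorted(population, key=lambda p: fitness(p, inputs, outputs))
            winners = ranked[:pop // 2]
            mutants = [(g + random.gauss(0, 0.05),
                        min(max(a + random.gauss(0, 0.05), 0.0), 1.0))
                       for g, a in winners]
            population = winners + mutants
        return min(population, key=lambda p: fitness(p, inputs, outputs))

A small number of complicated, hand-structured nodes tuned by search, rather than a huge number of dumb ones; exactly the direction that turned out to be the dead end.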
What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects. My guess is that Google/Stadia or Unity/UnityML are better places to do that work than Facebook, but if Carmack decides to learn physics engines* and make a dent I'm sure he will.
Until our environments are rich and diverse our agents will remain limited.
*More than that, I'm sure his knowledge already exceeds most people's.
Improbable tried to do that with Spatial OS. They spent $500 million on it. Read the linked article. No big game company uses it, because they cut a deal with Google so their system has to run on Google's servers. It costs too much there, and Google can turn off your air supply any time they want to, so there's a huge business risk.
Interestingly, companies like SideFX are also doing impressive work in distributed simulations (e.g. Houdini).
But that kind of realism is not needed for all AGI research.
I also spent some years using evolutionary algorithms to evolve control networks for simple robots. The computational resources available at the time were rather limited though. It should be more promising these days, now that a commodity gaming PC can spew out in 30 minutes what back then took all the lab's networked machines running each night for a few weeks.
On the flip side, successful robotics concepts might have more chance of being relevant to AGI.
I don't think so. Game NPCs don't need AI, which would be way overkill; they just need to provide the illusion of agency. I think for general AI you need a field where any other option would be suboptimal or inadequate, but in videogames general AI is the suboptimal option... more cost effective to just fake it!
> ... more cost effective is to just fake it!
Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.
A game that could make NPCs react dynamically to what the player does, while also creating a cohesive story for the player to experience, would be absolutely groundbreaking in my opinion.
This is more in the realm of AI story generation, but I haven't seen any work on this that generates stories you would ever mistake as coming from a human (please correct me if I'm wrong), so it would be amazing to see some progress here.
Story AI is basically having a writer sit down and write a branching story tree, with hand-authored writing the whole way. At best it's a manually coded directed acyclic graph.
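Concretely, the whole "story AI" of such a game can be a hand-authored graph. A minimal hypothetical sketch (node names and text are placeholders):

    # Minimal sketch of a hand-authored branching story as a DAG.
    story = {
        "intro":   {"text": "You wake in a cell.",
                    "choices": {"pick lock": "escape", "shout": "guard"}},
        "guard":   {"text": "A guard appears.",
                    "choices": {"bribe": "escape", "fight": "dungeon"}},
        "dungeon": {"text": "Thrown into the dungeon. The end.", "choices": {}},
        "escape":  {"text": "You slip into the night. The end.", "choices": {}},
    }

    def play(node="intro"):
        while True:
            print(story[node]["text"])
            choices = story[node]["choices"]
            if not choices:
                return
            # Every branch below was written by hand; the "AI" is just
            # graph traversal over authored content.
            node = choices[input(f"{list(choices)}> ")]

Every node has to be written by a human, which is exactly why meaningful branching is so expensive.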
Tactical AI, i.e. having the bad-guy soldiers move about the battlefield and shoot back at you in a realistic manner, is 100% about faking it. It's better to despawn non-visible, badly placed enemies and spawn well placed non-visible enemies than to have some super smart AI relocate the badly placed enemies into better locations. It's better to have simple mechanisms that lead to difficult-to-understand behavior than complex mechanisms that lead to instinctive behavior.
There was an amazing presentation at GDC maybe 3 years ago that perfectly articulated this. The game was something about rockets chasing each other. I wish I could find the link.
That's not entirely true - it's just that no game studios are willing to compromise on graphics and art for something silly like the ability to impact the game world.
I think they don't exist because it's an exceptionally difficult problem, even for games with lo-fi graphics or text only. I've found it hard to find any AI projects that generate stories or plots that are remotely compelling.
Big studio game companies push "your choices matter" as a selling point as well, but few deliver.
You also have to consider whether the complaints of "many" players matter when publishing a game. A percentage of vocal players will complain no matter what. Yes, they will complain even if you somehow implement true AI!
Maybe, but it would be an impressive demonstration of AI, and very different to what has been shown for Go, Chess and StarCraft.
I think a compelling AI-written short story, for example, would be leagues ahead of what is required to write a convincing chatbot: you need an overarching plot, subplots, multiple characters interacting in the world, and you have to track characters' beliefs and knowledge and what the reader must be thinking/feeling.
It would likely rely a lot on real-world and cultural knowledge though - Go and StarCraft are much cleaner in comparison.
> A percentage of vocal players will complain no matter what.
Yep, but I can't think of a single game that has a plot that meaningfully adapts to how the player plays. Either there are many endings but the path to each is short, or all the choices converge quickly back into the same path.
Again, please correct me if I'm wrong, but I've looked quite hard for examples of innovation in the above recently and haven't found much. You can find papers on e.g. automated story generation or game quest generation on Google Scholar from the last 10 years, but the examples I found weren't that compelling.
Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...
... what is "true" or "good" fiction is up for debate. In fact, it's a debate that can never be settled, because there is no right answer except what it feels like to you, your friends and the authors you respect.
But that said, I seriously doubt it would fool me, and I think it won't be within reach of an AI any time soon, or ever, not without creating an artificial human being from scratch. And maybe not even then, because how many real people can write compelling fiction anyway? :)
So it feels like you should be able to procedurally generate at least passable stories by combining common story arcs, templates, character archetypes etc. without too much effort, but I've yet to find any compelling examples of this anywhere. When you look into the problem more, you realise it's a lot harder than it seems.
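The naive version is easy enough to sketch, which is itself telling. A hypothetical toy (all templates invented for illustration); the output is grammatical but never actually compelling:

    import random

    # Naive template-based story generator: combine stock arcs,
    # archetypes, goals and obstacles into a one-line plot summary.
    archetypes = ["an orphaned farmhand", "a disgraced knight", "a cunning smuggler"]
    goals      = ["avenge their family", "lift an ancient curse", "win back a kingdom"]
    obstacles  = ["a jealous rival", "a dragon's riddle", "their own pride"]
    arcs       = ["overcomes", "is undone by", "makes peace with"]

    def generate():
        return (f"{random.choice(archetypes).capitalize()} sets out to "
                f"{random.choice(goals)}, {random.choice(arcs)} "
                f"{random.choice(obstacles)}, and is changed forever.")

    print(generate())

What's missing is everything between the template slots: causality, pacing, character interiority. Filling that gap is the hard part.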
We've seen lots of examples of chatbots that are said to pass the Turing Test but really aren't that intelligent at all, so a "Turing Test of fiction writing" as you put it sounds like a super interesting next step to me.
I struggle to see the distinction. Isn't the Turing Test defined as 'faking humans (or human intelligence) convincingly enough'?
There is a saying: the benefit of being smart is that you can pretend to be stupid. The opposite is more difficult.
I think the Turing Test is no longer thought of as an adequate metric for general AI (if it ever was to begin with).
In some sense you can think of interfacing w/ the online world + trying to win attention to yourself as the kind of general game that is being played.
This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.
"Common sense" can be though of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.
Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog". I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".
Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.
In a sense, the journey was the reward rather than the very unlikely short term outcome back then.
The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the fingertips it makes the smallest adjustments, again in tiny fractions of a second.
What makes me think of it like that is hearing about how the brain is actually really bad at predicting the path of things that don't act like that. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system that compensates for how it actually travels in order to hit a target with the thing.
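A crude illustration of why intuition fails there: a thrown ball covers ground roughly linearly in time once released, while a constantly accelerating rocket covers it quadratically. A toy comparison (numbers invented, not from any real weapon system):

    # Ballistic ball (constant horizontal velocity) vs rocket under
    # constant thrust (constant acceleration). Illustrative numbers only.
    v = 30.0   # ball's horizontal speed, m/s
    a = 60.0   # rocket's assumed constant acceleration, m/s^2
    for t in [0.5, 1.0, 2.0]:
        ball   = v * t              # distance grows linearly with time
        rocket = 0.5 * a * t ** 2   # distance grows quadratically
        print(f"t={t:.1f}s  ball={ball:6.1f} m  rocket={rocket:6.1f} m")

Doubling the time of flight doubles the ball's range but quadruples the rocket's, which is exactly the kind of relationship ball-catching intuition gets wrong.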
Compare what happens during a practice game of catch between six year old, first time Little Leaguers vs. MLB starters.
If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall he will run to where the angle will put the ball after it bounces. If I throw it high in the air he knows where to run almost immediately (again using my body language to know where I might be trying to throw it.). He’s very hard to fool, too, and will learn quickly to not commit to a particular direction too quickly if it looks like I’m faking a throw.
I always feel like he’d make a great soccer goalie if he had a human body.
There are counterexamples, such as AlphaGo which is all about planning and deep thinking. It also combines learning with evolution (genetic selection).
We don't need to think 10 "turns" ahead when trying to walk through a door, we just try to push or pull on it. And if the door is locked or if there's another person coming from the opposite side we'll handle that situation when we come across it.
Doors are basically planning triggers, more so than most things.
Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.
Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:
1. Equestrian jumping events; horses often balk before a hurdle
2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.
> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up
In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.
Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”
This is probably a variant of Andrew Ng's assertion that ML can solve anything a human could solve in one second, given enough training data.
But intelligence actually has a different role. It's not for those repeating situations that we could solve by mere reflex. It's for those rare situations where we have no cached response, where we need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both are necessary tools for making decisions, but they are optimised for different situations.
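In code, the two tools look quite different. A hypothetical toy sketch, where model() is an assumed one-step predictor of the environment:

    Q = {}                    # model-free "reflex": cached state-action values
    ALPHA, GAMMA = 0.1, 0.9

    def reflex_update(s, a, r, s2, actions):
        # Q-learning: no model of the world, just nudge the cached response.
        best_next = max((Q.get((s2, a2), 0.0) for a2 in actions), default=0.0)
        Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (r + GAMMA * best_next - Q.get((s, a), 0.0))

    def think(s, actions, model, depth=3):
        # Model-based "thinking": simulate consequences before acting --
        # exactly what you fall back on when there's no cached response.
        if depth == 0:
            return 0.0, None
        best_v, best_a = float("-inf"), None
        for a in actions:
            s2, r = model(s, a)          # model predicts next state and reward
            v, _ = think(s2, actions, model, depth - 1)
            if r + GAMMA * v > best_v:
                best_v, best_a = r + GAMMA * v, a
        return best_v, best_a

The reflex is cheap and fast but only works for situations it has seen; the planner is slow but handles novelty, provided the model is any good.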
They will also open a gate to let another horse out of their stall which I would count as some form of planning.
Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.
Sounds like most human beings, given an unpleasant stimulus, for example a spider.
I am pleasantly surprised by how quickly they have been tackling big new decision spaces.
OpenAI has already done some experiments here. All the way down at the bottom, under the "surprising behaviors" heading, 3 of the 4 examples involve the AIs finding bugs in the simulation and using them to their advantage. The 4th isn't a bug exactly, but a (missing) edge case in their behavior not initially anticipated.
There's an unmet need to solve simulation sickness in VR and first-person games.
Years ago John said that if you have $20k and a dedicated room you can make a convincing VR experience that won't make anyone sick.
While it's true that game AI is often held back by game design decisions, it's not true that technology isn't holding us back in this area as well.
 https://www.youtube.com/watch?v=gm7K68663rA (GDC Talk: Goal-Oriented Action Planning: Ten Years of AI Programming)
Games like PvE MMOs need to find a way to produce engaging content faster than it can be consumed, at a price point that is economically viable. The way they do it now is by having the players repeat the same content over and over again with a diminishing-returns, variable-reward behavioral reinforcement system.
You have to hit a spot where they are sometimes a bit surprising, but not in a way that cannot be reacted to quickly on your feet. This throws realism out of the window.
Plenty of games have NPCs with scripted routines, dialog, triggers, etc that could be improved either by reducing the dev cost to generate them without reducing quality or reacting to player behavior more naturally.
Don't forget there is a certain randomness to 'more natural', and with randomness you're going to invite Murphy to the party.
A weapons maker with a unique backstory and realistic conversations that reference it is more interesting than a bot, and opens up the possibility of unscripted side-quests.
Some significant part of gaming is risk-free experimentation in a simulated world. The experiments possible are bounded by the simulation quality of the world. More realistic NPC behavior would open up a lot more games.
You would see these factions fighting and gaining/losing territory throughout the game. You could chose to help them or just pass on by, but the actions progressed regardless of your choice.
The game may even be played by saying things on Twitter and becoming interesting enough that people DM you and try to build a relationship with you, while you're a bot.
That's part of it, but there are other factors too. The more complex the AI, the harder (i.e. more expensive) the game is to tune and test. Game producers and designers are naturally very uncomfortable shipping a game whose behavior they can't reasonably predict.
This is a big part of why gamers always talk about loving procedural generation in games but so few games actually do it. When the software can produce a combinatorial number of play experiences, it's really hard to ensure that most of the ones players will encounter are fun.
I mean: maybe it's more efficient to have it read all of Wikipedia really well before adding all the other noisy senses.
It is nowhere near good enough to avoid running into Moravec's Paradox like a brick wall as soon as you try to apply it outside the simulator.
Now AlphaGo and its implementation framework are much more sophisticated than Deep Blue. It's actually a framework for making single-task solvers, but that's all. The fact that it can make more than one single-task solver doesn't make it general in the sense we mean in the term AGI. AlphaGo didn't learn the rules of Go. It has no idea what those rules are; it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences. It's like an image classifier that can identify an apple, but has no idea what an apple is, or even what things are.
To build an AGI we need a way to genuinely model and manipulate objects, concepts and decisions. What's happened in the last few decades is we've skipped past all that hard work, to land on quick solutions to specific problems. That's achieved impressive, valuable results but I don't think it's a path to AGI. We need to go back to the hard problems of “computer models of the fundamental mechanisms of thought.”
There are indeed some people who learn chess by "reading the manual". Or learn a language by memorizing grammar rules. Or learn how to build a business by studying MBA business theories.
There are also tons of other people who do the opposite. They learn by simply doing and observing. I personally have no idea what an "adverb" is, but people seem perfectly happy with the way I write and communicate my thoughts. Would my English skills count as general intelligence, or am I just a pattern-recognition automaton? I won't dispute the pattern-recognition part, but I somehow don't feel like an automaton.
I can certainly see the potential upsides of learning some theory and reasoning from first principles. But that seems too high a bar for general intelligence. I would argue that the vast majority of human decisions and actions are made on the basis of pattern recognition, not reasoning from first principles.
One last note: "working out their consequences" sounds exactly like a lookahead decision tree.
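For what it's worth, that tree looks something like this plain minimax sketch. The game interface (moves/apply/undo/score) is assumed for illustration, and this is not AlphaGo's actual search, which uses Monte Carlo tree search plus learned evaluation:

    def lookahead(game, depth, maximizing=True):
        # Plain minimax: work out the consequences of each legal move
        # by recursing down the decision tree to a fixed depth.
        if depth == 0 or not game.moves():
            return game.score()
        values = []
        for m in game.moves():
            game.apply(m)
            values.append(lookahead(game, depth - 1, not maximizing))
            game.undo(m)
        return max(values) if maximizing else min(values)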
The thing is those are parts of our neurology that have little to do with general intelligence. I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery. In that sense high level Go and Chess players turn themselves into single-task solvers. They're better at bringing that experience and capability to bear in other domains, because they have general intelligence with which to do so, but those specialised capabilities aren't what make them a being with general intelligence. Or if specialising systems are important to general intelligence, it's as just a part of a much broader and more sophisticated set of neurological systems.
Here is my strongest prediction:
AGI is only possible if the AGI is allowed to cause changes to its inputs.
Current ML needs to be steered towards attention mechanisms and more Boltzmann nets / finite/infinite impulse response nets.
Could you elaborate on this point?
Do you mean that the AGI could change the source of inputs, or change the actual content of those inputs (e.g. filtering) or both?
And why do you think this is a critical piece?
I think I think, but might I just be using a single problem solver that gives the appearance of thinking?
Edit: which kind of explains the failure of good old-fashioned symbolic AI, as it was modelling the wrong thing.
[NB: I worked in good old-fashioned AI for a number of years]
When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve the problem. That's the definition of learning and intelligence that can generally be applied to any problem.
> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.
What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it of applying "cognitive tools", then it's capable of intelligence.
It can't be applied to any problem though. Take the example I gave elsewhere of a game where you provide the rules, and as the game progresses the rules change. There are real games that work like this, generally card games where the cards contain the rules, so as more cards come into play the rules change. AlphaZero cannot play such games, because there isn't even a way to provide it with the rules.
>> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.
>What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it of applying "cognitive tools", then it's capable of intelligence.
I'm saying that human minds apply many cognitive tools, and that AlphaGo is like one of those tools. It's not like the part choosing and deploying those tools, which is the really interesting and smart part of the system.
The human brain consists of a whole plethora of different cognitive mechanisms. Cognition is a broad term covering a huge variety of mechanisms, none of which by themselves constitute all of intelligence. A lot of people look at AlphaGo and say aha, that's intelligence, because it does something we do. Yes, but it only does a tiny, specialist fragment of what we do, and not even one of the most interesting parts.
If an AI shows the same capabilities as the average human being, I would say that is AGI by definition. Regardless of whether it meets the requirement for Platonic Knowledge.
But on the other hand, if you get into rote memorization before you start the game, it's going to slow you down because you have no context.
It's certainly not the most efficient way to use our current hardware, and it's not clear to me how big some of these neural nets would have to be, but if we had computers with a trillion times the memory capacity and speed, IMO it'd certainly work on some level.
Imagine playing a game of Chess in which the pieces and rules gradually changed bit by bit until by the end of the game you were playing Go. That's much closer to what real-life problems are like, and a human child could absolutely do that. They might not be much good at it, but they could absolutely do it, even without ever having played either game before, just learning as they went. Note to AGI researchers: if your chatbot can't cope with that or a problem like it without any forewarning, don't bother applying for a Turing Test with me on the other side of the teletype.
For humans, the more previous situations we know about, the better, because we have more chance of applying a model that works in the new environment. That's called "experience".
I've seen people apply their normal behaviour to situations that have changed, and then get totally confused (and angry) as to why the result isn't the same. Observe anyone travelling in a new country for examples ("why don't they show the price with the sales tax included here? This is ridiculous!").
In a perfect world, sure, we'd construct a rational mental model of a new situation and test it carefully to ensure it matched reality before trusting it, and then apply it correctly to the new situation. But it's not a perfect world, and people don't actually do that. Usually we charge in and then cope with the results.
Of course, I'm not saying that AI should do that. It'll be interesting to see how a "good" general AI copes with a genuinely new situation.
It would be nice if it worked like that, but I think you're massively underestimating the problem set here. I'd suggest it's more like the difference in architectural glue one needs as an engineer between writing a command-line util and a fully fledged enterprise solution (i.e. orders of magnitude).
Of course because we don't actually know how intelligence exactly works we're both guessing here.
As others have mentioned here though.. this becomes horrifying if we've created something sentient to kill in games or enslave.
It may be a brutal struggle, but perhaps that struggle is important. Perhaps having a simulated tree fall on you is more meaningful than being reaped by some objective function at the end of an epoch.
edit: wrong book ;)
Isn't that what genetic algorithms are?
I mean, yes, you killed a sentient being. But if that sentient being has a thousand concurrent lives, then what does "killing" one of those lives even mean? And if it can respawn another identical life in a millisecond, does it even count as killing?
I suspect that having sentient virtual entities will provide philosophy and ethics majors a lot of deep thinking room. As it already has for SciFi authors.
What if AGI is just a combination of very highly advanced single-task solvers?
I happen to believe that it is an emergent behavior once the complexity gets high enough, so AGI might just be a (large?) collection of AlphaGo solvers connected to different inputs.
Kind of like humans then.
As adults we know what an apple is because we understand it as a concept, the ideal "apple", and can manipulate the concept into areas way outside the original concept (say, the phrase "apple of my eye").
All they know is how to recognize a common pattern on a pixel grid, after seeing a large number of examples, and then draw a box around it.
The fact that a child has a body and can manipulate the world with all 5 senses working in concert should not be underestimated.
Very quickly (assuming said child doesn't eat something too bad), in the absence of an external oracle, the child learns a very productive mental model of what an apple is.
This type of feedback loop seems eminently translatable to machine learning, assuming we can encode the concept space in a way that allows the model to be encoded and trained within a reasonable set of constraints.
The child develops concepts and is able to create and evaluate inferences, and thus able to understand metaphors etc.
The concept is what most AI approaches lack. Google's image search can identify apples, and cherries, and probably can categorize both as fruits, but it can't infer that this probably contains seeds, is a living being, etc.
As I have an academic background in learning theory and developmental psychology, I'm pretty pessimistic about the current AI trend, autonomous driving etc. Most smart people in the field have been chasing what are effectively more efficient regression functions for over 60 years now, and I almost never stumble upon approaches that have looked at what we know about actual human learning processes, development of the self etc.
Moravec's paradox IMO should have been an inflection point for AI research. This is the level of problems AI research has to tackle if it ever wants to create AGI.
Sounds like the start of a truly horrifying Black Mirror episode
That episode already exists.
>The Perky Pat Layouts itself is an interesting concept. Here's Dick, in the early 60's, coming up with the idea for virtual worlds. I mean, Second Life and other virtual worlds are just a mapping of the Perky Pat Layouts onto cyberspace. Today Facebook acts like the PP Layouts, taking people's minds off toil and work and letting them engage others in a shared virtual hallucination -- you're not actually physically with your friends, and they might not even be your friends.
>Dick’s description of the Can-D experience is essentially a description of virtual sex:
>“Her husband -- or his wife or both of them or everyone in the entire hovel -- could show up while he and Fran were in the state of translation. And their two bodies would be seated at proper distance one from the other; no wrong-doing could be observed, however prurient the observers were. Legally this had been ruled on: no co-habitation could be proved, and legal experts among the ruling UN authorities on Mars and the other colonies had tried -- and failed. While translated one could commit incest, murder, anything, and it remained from a juridicial standpoint a mere fantasy, an impotent wish only.”
>Another character says “when we chew Can-D and leave our bodies we die. And by dying we lose the weight of -- ... Sin.”