
Yeah I believe in this game / simulated world NPC idea too. To get the kind of complexity we want we either need sensors in the real world or interfacing in a virtual world that humans bring complexity to (probably both -- the humans are part of the sensing technology to start). Things like AlphaZero etc. got good because they had a simulatable model of the world (just a chess board + next state function in their case). We need increasingly complex and interesting forms of that.
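To be concrete about "next state function": for a board game it's just a pure transition function, which is what lets an agent plan by simulation. A toy sketch (tic-tac-toe, all names made up):

    # Toy "next state function" in the AlphaZero sense.
    # Board is a tuple of 9 cells: 0 empty, +1 / -1 for the players.
    def legal_moves(board):
        return [i for i, cell in enumerate(board) if cell == 0]

    def next_state(board, move, player):
        assert board[move] == 0, "illegal move"
        new = list(board)
        new[move] = player
        return tuple(new)

    start = (0,) * 9
    print(next_state(start, 4, +1))   # +1 takes the center square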

In some sense you can think of interfacing w/ the online world + trying to win attention to yourself as the kind of general game that is being played.




I've long taken the position that intelligence is mostly about getting through the next 10-30 seconds of life without screwing up. Not falling down, not running into stuff, not getting hurt, not breaking things, making some progress on the current task. Common sense. Most of animal brains, and a large fraction of the human brain, is devoted to managing that. On top of that is some kind of coarse planner giving goals to the lower level systems.
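A hedged sketch of that layering, just to pin down the idea (everything here is hypothetical):

    # Hypothetical two-layer architecture: a slow, coarse planner
    # hands goals down; a fast reactive layer handles the next few
    # seconds (don't fall, don't hit things, inch toward the goal).
    def coarse_planner(world_model):
        return "reach_door"                 # updated rarely

    def reactive_layer(goal, senses):
        if senses["obstacle_ahead"]:        # runs many times a second
            return "step_around"
        return "step_toward:" + goal

    goal = coarse_planner(world_model={})
    for tick in range(3):
        senses = {"obstacle_ahead": tick == 1}
        print(reactive_layer(goal, senses))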

This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.

"Common sense" can be though of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.

Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog".[1] I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".

Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.

[1] http://people.csail.mit.edu/brooks/papers/CMAA-group.pdf


I remember a TED talk from many years ago where the speaker proposes that intelligence is maximizing the future options available to you:

https://www.ted.com/talks/alex_wissner_gross_a_new_equation_...


[flagged]


The video linked by the direct parent to my comment is a prank video.


Prank video? Satire at most. Did you even watch it?


The genius of Cog was that it provided an accepted common framework for building a grounded, embodied AI. Rod was the first I saw to have a literal roadmap on the wall of PhD theses laid out around a common research platform, Cog, in this branch of AI.

In a sense, the journey was the reward rather than the very unlikely short term outcome back then.


I was thinking about the manipulation issue tonight. I'd been throwing a tennis ball in the pool with my kids and I realised how instinctual my ability to catch was. A ball leaves my kid's hands and I move my hand to a position, fingers just wide enough for a ball, and catch it. All of it happens in a fraction of a second.

The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the fingertips it makes the smallest adjustments, again in tiny fractions of a second.


I don't know if I'd call it modelling the physics of a ball in flight exactly. It kind of seems like the brain has evolved a pathway to be able to predict how ballistic projectiles - affected only by gravity and momentum - move, that it automatically applies to things.

What makes me think of it like that is hearing about how the brain is actually really bad at predicting the path of things that don't act like that. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system, which compensates for how the projectile actually travels, in order to hit a target with the thing.
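Rough numbers to illustrate why the intuition breaks (a toy comparison, constants invented):

    # Thrown ball (roughly constant speed after release, ignoring
    # drag) vs. a rocket under sustained acceleration. The brain's
    # built-in predictor fits the first case, not the second.
    v0 = 30.0    # m/s, initial speed of a thrown ball (invented)
    a = 30.0     # m/s^2, sustained rocket acceleration (invented)

    for t in (0.5, 1.0, 1.5, 2.0):
        thrown = v0 * t            # distance grows linearly in time
        rocket = 0.5 * a * t * t   # distance grows quadratically
        print(f"t={t:.1f}s  thrown={thrown:5.1f} m  rocket={rocket:5.1f} m")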


You mean the brain has evolved over millennia to model the physics of the world and specialize in catching and throwing things.


Absolutely. It also requires more than the evolutionary adaptations: the catcher has to have practiced the specific motions enough times to become proficient, to the point where it's second nature.

Compare what happens during a practice game of catch between six-year-old, first-time Little Leaguers vs. MLB starters.


Dogs can do this too. And quite a bit more impressively than most humans.


It’s always impressive to watch how good my dog is at anticipating the position of the ball way ahead of time.

If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall he will run to where the angle will put the ball after it bounces. If I throw it high in the air he knows where to run almost immediately (again using my body language to know where I might be trying to throw it). He’s very hard to fool, too, and will learn quickly not to commit to a particular direction too early if it looks like I’m faking a throw.

I always feel like he’d make a great soccer goalie if he had a human body.


That's kind of the thesis Rodolfo Llinas puts forward in a book of his, I of the Vortex[0], although more about consciousness than intelligence. That is, consciousness is the machinery that developed in order for us to predict the next short while and control our body through it.

[0] https://mitpress.mit.edu/books/i-vortex


> On top of that is some kind of coarse planner giving goals to the lower level systems.

There are counterexamples, such as AlphaGo, which is all about planning and deep thinking. It combines learned intuition with explicit lookahead (Monte Carlo tree search).


True, but AlphaGo is specialized for a very specific task where planning and deep thinking are basic requirements for high-level play.

We don't need to think 10 "turns" ahead when trying to walk through a door; we just try to push or pull on it. And if the door is locked, or if there's another person coming from the opposite side, we'll handle that situation when we come across it.


That’s not true; human beings plan ahead when opening doors more than in many other situations. Should I try to open this bathroom door, or will that make it awkward if it’s locked and I have to explain that to my coworker afterwards? Should I keep this door open for a while so the guy behind me gets through as well? Not to mention that people typically route-plan at doorways.

Doors are basically planning triggers, more so than many other things.


Horses don't plan though, and they are much better than computers at a lot of tasks. If we can make a computer as smart as a horse, then we can likely also make it as smart as a human by bolting some planning logic on top of that.


“Horses don’t plan though[...]”

Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.

Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:

1. Equestrian jumping events; horses often balk before a hurdle

2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.


The context was this quote:

> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.

Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”

https://thehorse.com/16967/navigating-barriers-can-horses-wa...


> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

This is probably a variant of Andrew Ng's assertion that ML can solve anything a human could solve in one second of thought, given enough training data.

But intelligence actually has a different role. It's not for the recurring situations we could handle by mere reflex; it's for the rare situations where we have no cached response and need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both are necessary tools for making decisions, but they are optimised for different situations.
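A minimal sketch of that distinction, with made-up toy names (not any particular library):

    # Model-free ("reflex"): cache values from raw experience,
    # no lookahead. Classic tabular Q-learning update.
    def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
        best_next = max(Q[s_next].values())
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

    # Model-based ("thinking"): simulate each action through a
    # model of the world and pick the best imagined outcome.
    def plan_one_step(state, actions, model, value):
        return max(actions, key=lambda a: value(model(state, a)))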


In my experience they learn to open gates. They certainly aren't trained to do this, but learn from watching people or each other.

They will also open a gate to let another horse out of its stall, which I would count as some form of planning.

Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.


> They can manage to be surprised by the same things every single day.

Sounds like most human beings, given an unpleasant stimulus, for example a spider.


Thank you for the context and new resources to learn from.


It took us millions/billions of years of evolution and a couple of years of training in real life to be able to walk through a door. It's not a simple task even for humans: it requires maintaining a dynamic equilibrium, which is basically solving a differential equation just to keep from falling.
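For a feel of the differential equation involved: standing is roughly an inverted pendulum, unstable without constant correction. A toy Euler integration (constants invented):

    # Inverted pendulum: for small lean angles theta'' = (g/L)*theta,
    # so any lean grows exponentially unless actively corrected.
    g, L = 9.8, 1.0            # gravity (m/s^2), effective leg length (m)
    theta, omega = 0.01, 0.0   # initial lean (rad), angular velocity
    dt = 0.02                  # time step (s)

    for _ in range(100):       # simulate 2 seconds, no correction
        omega += (g / L) * theta * dt
        theta += omega * dt
    print(f"lean after 2 s with no correction: {theta:.2f} rad")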


Board games have been effectively conquered. Now the big boys are working on StarCraft and Dota 2, and it takes a shitload of money to pay for the compute and simulation necessary to train them. Not something you can do on the cheap.


DeepMind's StarCraft AIs are already competing at the Grandmaster level[0], the highest tier of competitive play, ranking above 99.8% of active players.

I am pleasantly surprised by how quickly they have been tackling big new decision spaces.

[0] https://deepmind.com/blog/article/AlphaStar-Grandmaster-leve...


The next arena is multi-task learning. Sure, I lose to specialized intelligences in each separate game, but I can beat the computer at basically every other game, including the game of coming up with new fun games.


Perhaps the first sentient program will be born in an MMORPG?


Just imagine all the exploits they'll find and abuse.

OpenAI has already done some experiments here [0]. All the way down at the bottom, under the "surprising behaviors" heading, 3 of the 4 examples involve the AIs finding bugs in the simulation and using them to their advantage. The 4th isn't a bug exactly, but a (missing) edge case in their behavior that wasn't initially anticipated.

[0] https://openai.com/blog/emergent-tool-use/


There's an entire anime genre about that:

https://en.wikipedia.org/wiki/Isekai


Ah, I knew anime would be useful someday



Read "Three Laws Lethal", by Walton.



