Call me a cynic, but the fact that after almost 10 years of AI hype we are still working our way down the list of popular board games is a bit of a downer for me. I mean, having AIs to play Stratego, Risk, Go, Diplomacy, and what have you against is nice. But literally billions of dollars have been spent on these projects, and I have really come to the point where I just don't believe anymore that the current AI approaches will ever generalize to the real world, even in relatively limited scopes, without significant human intervention and/or monitoring. What am I missing?
AI remains better than humans at anything that has well-defined rewards and a small time gap between action and feedback (either naturally, like poker, or through value-function engineering, like Go or chess).
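To make "value-function engineering" concrete, here is a toy sketch (my own illustration, not taken from any of the systems discussed in this thread): a hand-crafted chess material count that turns a position into a single number a search can optimize, long before the sparse win/loss reward arrives at the end of the game.

    # Toy example of an engineered value function (illustrative only):
    # classic material values stand in for the sparse win/loss reward.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def material_value(diffs: dict) -> int:
        """diffs maps piece letters to (my count minus opponent's count)."""
        return sum(PIECE_VALUES[p] * d for p, d in diffs.items())

    # Up a rook and a pawn, down a knight: the proxy reward is +3.
    print(material_value({"R": 1, "P": 1, "N": -1}))  # 3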
The problem here is that it's missing the "glue" to more real world applications. This is where more humdrum software engineering comes in.
Diplomacy, in this regard, is much more interesting than Stratego or beating the next video game: it mixes cooperative game theory with NLP and reinforcement learning.
> The problem here is that it's missing the "glue" to more real world applications. This is where more humdrum software engineering comes in.
This is a bold statement. The world does not function based on "well-defined rewards". The concept of "common sense", which some consider table stakes for a human operating competently in our world, is mostly made up of things that are neither well defined nor backed by a tremendous amount of training data. Current ML approaches require both.
Correct, but there are likely real-world applications of AI in strategic reasoning (outside the obvious finance bots) that work by engineering a "translation layer" from real-world constraints to AI-compatible value functions and back.
But in general, yes, this is why, since 2013, we haven't seen AI make anywhere near as massive strides in the wider world as it has in boxed-in applications like games.
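As a rough illustration of that "translation layer" idea (purely hypothetical names, loosely in the spirit of the elevator-scheduling example mentioned further down the thread): wrap a messy real-world objective so it looks like a game-style value function to the optimizer, then translate the chosen option back into a real-world action.

    # Hypothetical sketch only; all names are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class CandidateSchedule:
        waiting_minutes: list   # how long each pending request would wait
        energy_cost: float      # cost of running this schedule

    def value(s: CandidateSchedule) -> float:
        # Engineered proxy reward: penalize total waiting and energy use.
        return -(sum(s.waiting_minutes) + 0.1 * s.energy_cost)

    def pick_schedule(candidates: list) -> CandidateSchedule:
        # The "AI" part can now be any search/optimizer over this scalar value;
        # the chosen schedule is then translated back into concrete actions.
        return max(candidates, key=value)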
> The world does not function based on "well-defined rewards".
Humans can be neatly reduced to a "procreate, and make sure your progeny procreate" value function. A lot of apparent counterexamples to that turn out to make sense when second-order effects on progeny are considered.
It's been providing real value to me over the past day for practicing Spanish, explaining machine learning concepts, and doing fancy write-ups in LaTeX. And this one can't even use Google yet! (Other research teams have already created models capable of doing so; it's only a matter of time until these innovations are brought together in one place.)
If you took the word "Wikipedia", and threw all the letters up in the air, what are some possible things they'd spell once they fell on the ground again?
If you took the word "Wikipedia" and threw all the letters up in the air, some possible things they could spell once they fell on the ground again might include "kawipieda," "keiapwidi," "kipedwaia," "pwakieida," or "eipawikid." However, without being able to see the exact arrangement of the letters on the ground, it's impossible to say for sure what they would spell.
We don't know how well it is generalising to your input without knowing the training data. Failure cases, however, tell us a lot, such as the following:
Input: "If five cats can catch five mice in five minutes, how long will
it take one cat to catch one mouse?"
Output: "If five cats can catch five mice in five minutes, then it will take one cat one minute to catch one mouse. This is because the number of cats and mice are inversely proportional, so as the number of cats decreases, the time it takes for a cat to catch a mouse will increase. Similarly, as the number of mice decreases, the time it takes for a cat to catch a mouse will decrease. Therefore, if five cats can catch five mice in five minutes, then it will take one cat one minute to catch one mouse."
As you can see, it fails at simple logic because it doesn't understand that the number of mice isn't fixed in the question, whereas a human would immediately pick that up because they reason beyond x=5 and y=5.
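For reference, here is the standard rates arithmetic behind the intended answer (a minimal sketch of my own, assuming the cats hunt in parallel): it works out to five minutes, not the one minute the model claims.

    # Quick check of the "rates" reasoning (illustrative sketch, not from the thread).
    cats, mice, minutes = 5, 5, 5
    mice_per_cat_per_minute = mice / (cats * minutes)   # 0.2 mice per cat-minute
    one_cat_one_mouse = 1 / mice_per_cat_per_minute     # minutes for one cat to catch one mouse
    print(one_cat_one_mouse)                            # 5.0, not 1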
Are you sure a human would immediately catch this? The question is somewhat ambiguous, and I bet if you posed it to many people, they would take the oversimplified, non-gotcha approach and simply say one minute for one mouse, just like the AI. Of course, if you abstract out, there are so many other variables at play, but within the confines of a simple word problem the answer is not necessarily incorrect.
You could probably test this by asking a few friends this question and seeing what they say. Outside of pure math problems, you can get into an infinite regress defining the underlying first principles behind any given assumption.
I am not claiming anything other than the fact that we do not know the training data, so not much can be inferred about how well it generalises from some success case.
Quite interesting that it will make subtle errors in its otherwise reasonable-looking answer, e.g. "kipedwaia" has two "a"s; "kawipieda", "kipedwaia" and "pwakieida" have only two "i"s.
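A quick way to verify this is to compare letter counts against "Wikipedia" (a small check of my own, with the candidate strings copied from the model's answer above):

    # Which of the model's "anagrams" actually use the letters of "Wikipedia"?
    from collections import Counter

    target = Counter("wikipedia")  # three i's, one a
    for candidate in ["kawipieda", "keiapwidi", "kipedwaia", "pwakieida", "eipawikid"]:
        print(candidate, Counter(candidate) == target)
    # Only "keiapwidi" and "eipawikid" match; the other three have the wrong letter counts.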
I have seen reports that it will happily hallucinate a plausible but wrong answer to all sorts of different prompts, intermixed with many mostly correct answers. It's interesting to think about how to place trust in such a system.
Peter Watts has a series called Rifters that explores this a little. "Smart gels", which are neural nets made up of a mishmash of cultured neurons and silicon, run most of society. They're trained just like neural nets today, and therefore their decision-making process is basically a black box. They do a great job, but no one is really sure how they get there; still, they work great and they're so much cheaper, so who cares.
Anyhow, spoiler alert: the neural nets running the virus response have been inadvertently trained to prefer simple systems over complex ones without anyone realizing it, and they decide that a planet with no life on it after being wiped out by the virus is infinitely simpler than the present one, so they start helping the virus instead of stopping it.
So the short answer to your question is that I would not place much, if any, trust in systems like that, at least for anything with high-stakes, real-world consequences.
This kind of AI is really useful, at least for creating video games. Game devs spend countless hours making AI which is still not as effective or realistic as real humans; a straightforward way to create good game AI from ML would be huge.
As others mentioned, AI is making headway in enterprise and accounting, and achieving the “last mile” of human work: better image recognition for handwritten forms and mail, better content and sentiment analysis for reducing spam, robot-arm tasks which are more and more complex (yet still tame compared to humans).
“AI” hype is indeed overblown. If you think we’re close to reaching the singularity or anything resembling skilled human work, you will almost certainly be disappointed. We probably have decades of slow improvement ahead: more and more of these “breakthroughs” which aren’t really amazing compared to a human five-year-old, and which aren’t really going to revolutionize industry, but which will nonetheless have practical benefits.
Now game companies won't even own their AI! They will get to rent it by the minute from Google. The death of consumers having control over the software they pay for fucking suuuucks.
Not necessarily. Right now, AIs like this take a lot of compute and resources to train. But as we've seen with Stable Diffusion, it's likely that in the coming years we will scale them down and create more open-source ones, so that indie devs can train them and players can run them on their own gaming GPU.
I suspect that a real artificial intelligence will require a handful of specialized subsystems linked together in the right way. I feel like the current method of research is to find problems similar to solved ones but with a few new challenges, then look to solve those specific challenges. Doing this for ever more complex problems requires all sorts of unique solutions, and over time we find ways to combine and simplify those to get generalized systems. I don’t think the systems of today will generalize as you desire, but I think this method of research seems like a good one? What do you think?
Enterprise deployments. When IBM's Watson gets deployed to run your building's elevator scheduling, we don't hear about it. Nor do we really hear about AI powering hospital backend systems, or, e.g., fraud modeling at Visa. Twitter assuredly was doing ML on their backend as well.
Or on YouTube as an attention-retainer. It's really good at drip-feeding content over weeks/months, and it changes what videos are recommended based on the duration of the viewing session.
These board games are models of real human problems. These reasoning and tree-searching tasks are very general, and humans perform them all the time in work and in personal life.
I agree that these specific models are not going to be useful outside of board games. But in the future, when there is the opportunity for AIs to interact with the world for real, this kind of research will allow AIs to dramatically outperform humans on these tasks.
There is work on zero-shot learning and on procedurally generated environments like Procgen, but it doesn't go mainstream because the results aren't as cool to most people as some new record on a known board game or video game.
I agree with you. I watched every Lee Sedol Go game in the first match live. Lost some sleep due to the time difference. I was so excited. That was a long time ago. Now large language models get me more excited for progress in AI.
At least this new model is very different from the approach taken in AlphaGo.
The AI hype has been around for much longer than 10 years. Ray Kurzweil's book "The Singularity Is Near" was first published in 2005. Deep Blue beat Garry Kasparov in 1997.
There was a time when I thought that maybe there was something more to AI than a fancy statistical model for when you need to fit non-linear data. But I’m now solidly of the belief that AI is precisely a very powerful statistical tool. I honestly think there was never any real strategy for getting AI to anything more than specialized learning for deeper inference using a lot of computational power.
Don’t get me wrong, using AI for that purpose is pretty amazing (but can also lead to some sketchy results if you don’t know what you are doing[1]), but pretending it will lead to some “general AI” is nothing but hype IMO. And teaching AI to play these board games better than a grandmaster only serves to increase that hype.
I’m not an AGI fanboy. I agree that the current line of inquiry (i.e., deep learning) won’t get us there. I think neurosymbolic reasoning is needed. That work is still nascent, and worse, we don’t have great ways to connect our current paradigm to it.
Now that I think about it, is there any need for an AGI model? Is this a classic case of a (magic) solution looking for a problem?
There are for sure use cases for inference models in generalized (or rather ill-understood, or even highly dynamic) non-linear systems, and deep learning models kind of ace that, given enough training data and a lot of computational power. However, I’m not really sure what we will use AGI for.