AI is terrible at dealing with unexpected events. Games AI is good at are relatively deterministic, i.e. all possible outcomes are known. Replicating art and images is the same way.
If you could script new combat units in a video game on the fly or tweak the rules slightly, the human would slaughter the AI when an equally skilled human opponent would not lose so easily or even necessarily lose at all. You can see this in games like Galactic Civilizations where you can build your own units and unusual combos confuse the heck out of the computer opponent.
Same with cars. The entire approach is currently based around exposing the AI to every possible outcome. I remember a seminar on AI Safety where the vehicle AI had a problem with plastic bags in the air and it would swerve to avoid them. No human would have an issue with that.
I worked in innovation for a bank, looking at automating exactly these kinds of things, and even spent a few days doing the jobs myself, which was eye-opening. I was a developer, not a manager looking at a job spec, so I was someone who would have actually done the work. 90% of the job could be automated, but 10% was dealing with wacky exceptions, many of which they had never specifically seen before. We had someone whose job was taking PDFs and extracting tables of income and expenses. They were generally standardized PDFs, so that seems like a good thing to automate, right?
Well, no. It turned out tons of the financial advisors had added custom rows, which the person doing the input had to interpret into other columns. It was striking that while the job was menial data entry, it was nowhere near as mundane as one might imagine: the guy was still making a judgment call on whether to classify "farm income" under a person's investment income category or as regular income for the purpose of investment advice.
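The split described above, automating the standardized 90% while routing the judgment calls to a human, can be sketched roughly like this (the row labels and category names are hypothetical, not from the actual bank system):

```python
# Hypothetical sketch of the "automate the standard rows, flag the rest" approach.
# A lookup table handles the standardized PDF rows; anything unrecognized
# (e.g. a custom "farm income" row an advisor added) goes to a human.

KNOWN_ROWS = {
    "salary": "regular_income",
    "dividends": "investment_income",
    "rental income": "investment_income",
}

def classify_row(label: str) -> str:
    """Map a row label to an income category, or flag it for review."""
    return KNOWN_ROWS.get(label.strip().lower(), "needs_human_review")

print(classify_row("Salary"))       # standardized row: handled automatically
print(classify_row("Farm income"))  # custom row: routed to a judgment call
```

The hard part is not the lookup; it is that the `needs_human_review` bucket never shrinks to zero, because advisors keep inventing rows no one has seen before.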
I have a friend currently on a robotic process automation internship with another bank. Same issue. When the RPA dev people actually go and do these jobs, they realize that the work frequently deviates from the approved job spec, with the people in them making small but significant judgment calls.
It is not a lack of knowledge about what AI can do in either of those cases. It is not a lack of data, as both banks have armies of people doing it and millions of clients. It is that for AI to do the job, all manner of other things would need to be standardized and reformed, and if that were done, why use AI to solve the problem in the first place? Much of it would then be simply computational.
>You can see this in games like Galactic Civilizations where you can build your own units and unusual combos confuse the heck out of the computer opponent.
AIs will do the same to humans when trained against other machines instead of on human match data, since the AIs will try out things most humans would assume are illegal and thus never use in regular matches. Like when Chamley-Watson first struck an opponent with his foil from behind his back.