Hmm... I think training a program to fool another program is a much, much easier problem than driving a car.
As an example, you only have to learn to emulate a single 'human-like' pattern to fool the ML, and your target function is a simple binary: fooled or not fooled. Once you've achieved 'fooled', you're done.
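A minimal sketch of that "fooled or not fooled" objective, using a hypothetical toy classifier (the weights, step size, and FGSM-style update here are illustrative assumptions, not any particular production detector):

```python
import numpy as np

# Hypothetical toy detector: logistic regression with fixed random weights.
rng = np.random.default_rng(0)
w = rng.normal(size=20)

def p_human(x):
    # Probability the detector assigns to the "human" class.
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Start from an input the detector confidently labels "bot".
x = -w / np.linalg.norm(w)
assert p_human(x) < 0.5

# The attacker's loop: the target is binary -- fooled or not fooled.
# Nudge the input along the sign of the gradient (which for a linear
# model is just sign(w)) and stop the moment the label flips.
eps = 0.05
steps = 0
while p_human(x) < 0.5:
    x = x + eps * np.sign(w)  # FGSM-style step
    steps += 1

print(f"fooled after {steps} steps, p(human) = {p_human(x):.3f}")
```

The point of the sketch is the stopping condition: the attacker doesn't need to model driving-style complexity, only to cross one decision boundary once.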
Whereas to drive a car you have to handle the near-infinite variation in inputs you might encounter on the road, against complex, competing target functions, e.g. safety versus speed.
The problem with relying on ML on one side is that it can be gamed by ML on the other.