I can't recall where, but I read that Tesla or Google was actually using GTA to train their self-driving cars: it's a spectacularly advanced simulation of driving through an urban environment, so they didn't have to build their own.
There was an interesting academic paper showing that you could train in GTA and transfer to the KITTI dataset with reasonable results: https://arxiv.org/abs/1610.01983
deepdrive.io creator here - I'm actually not affiliated with the Berkeley project of the same name. There's also a DeepDriving project at Princeton, plus plenty of other (mostly perception) projects using GTAV, so it can be confusing. I'm hoping the GTAV self-driving-car efforts can start to standardize around the Universe integration, though.

Having worked on it, I can say firsthand that the Universe architecture is definitely amenable to sending radar, lidar, bounding boxes, segmentation, camera control, and the other kinds of data the various sub-fields of self-driving are interested in. Super excited to see how people use it!
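To make the multi-modal idea concrete, here is a minimal sketch of what one step's observation payload from a simulator might look like when bundled for transport over a Universe-style channel. Every field name here is hypothetical for illustration; it is not the actual Universe/GTAV integration's schema.

```python
import json

# Hypothetical helper: bundle one step's sensor data into a single
# JSON-serializable dict, as a simulator-to-agent message might be.
def make_observation(frame_id, image_shape, lidar_points, boxes):
    return {
        "frame_id": frame_id,
        "camera": {"shape": image_shape},            # RGB frame metadata
        "lidar": {
            "num_points": len(lidar_points),         # point-cloud summary
            "points": lidar_points,                  # [x, y, z] per point
        },
        "bounding_boxes": boxes,                     # per-object 2D boxes
    }

obs = make_observation(
    frame_id=0,
    image_shape=[84, 84, 3],
    lidar_points=[[1.0, 2.0, 0.5], [3.2, -1.1, 0.4]],
    boxes=[{"label": "car", "xywh": [10, 20, 40, 30]}],
)
payload = json.dumps(obs)  # what would travel over the wire
```

The point is just that once each sensor modality is a named channel in one message, downstream perception or control code can subscribe to whichever subset it needs.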