The quick way to tell a self-driving sensor suite from a mapping one is the density of the point cloud it collects, i.e. the type and number of sensors: a self-driving suite is unnecessarily dense for mapping. Directionality is the other tell: a self-driving rig looks mostly forward plus at the immediate vicinity of the car, while a mapping rig looks sideways and doesn't care about the immediate vicinity. I'd speculate that Apple, historically being weaker at [higher-level] software, is having trouble building dynamic scene perception from the big-can Velodyne stream and is trying to solve it the "hardware way", by structuring the perception data in a more time/space-consistent form within limited sectors/planes.
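For illustration, here's a toy sketch of that density/directionality heuristic in Python. Everything in it is my own assumption, not anything from a real pipeline: the function name, the thresholds, and the vehicle-frame convention (x forward, y left, z up) are all made up.

```python
import numpy as np

def classify_rig(points: np.ndarray) -> str:
    """Toy heuristic: guess whether a point cloud came from a
    self-driving rig or a mapping rig. `points` is an (N, 3) array
    in the vehicle frame (x forward, y left, z up, meters).
    All thresholds are illustrative, not calibrated values.
    """
    r = np.linalg.norm(points[:, :2], axis=1)              # horizontal range
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))

    near = np.mean(r < 10.0)                               # immediate vicinity
    forward = np.mean(np.abs(azimuth) < 45.0)              # forward cone
    sideways = np.mean(np.abs(np.abs(azimuth) - 90.0) < 45.0)  # side cones

    # Self-driving: dense returns ahead and right around the car.
    # Mapping: sparse near field, most returns off to the sides.
    if near > 0.2 and forward > sideways:
        return "self-driving"
    return "mapping"

# A synthetic, forward-biased cloud should trip the self-driving branch.
rng = np.random.default_rng(0)
cloud = rng.uniform([-5, -5, 0], [50, 5, 3], size=(100_000, 3))
print(classify_rig(cloud))  # -> "self-driving"
```

The point isn't the exact numbers, just that coarse point-cloud statistics alone are enough to separate the two kinds of rigs.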