We've worked with a number of self-driving companies, including GM Cruise, nuTonomy, Voyage, and Embark, to build high-quality training datasets quickly via our API. Scale is the perfect platform for this work: we're focused on very high-quality data produced by humans and delivered through an API.
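For concreteness, here's a minimal sketch of what creating an annotation task through the API might look like. The endpoint, auth scheme, and field names below are illustrative approximations rather than a verbatim copy of the docs:

```typescript
// Sketch of creating a 2D box annotation task over HTTP.
// Endpoint and field names are assumptions, not verified docs.
const API_KEY = process.env.SCALE_API_KEY ?? ""; // hypothetical env var

async function createAnnotationTask(imageUrl: string): Promise<unknown> {
  const res = await fetch("https://api.scale.com/v1/task/annotation", {
    method: "POST",
    headers: {
      // Assumes HTTP Basic auth with the API key as the username.
      Authorization: "Basic " + btoa(API_KEY + ":"),
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      callback_url: "https://example.com/callback", // results POSTed here
      instruction: "Draw a box around each car and pedestrian.",
      attachment_type: "image",
      attachment: imageUrl,
      objects_to_annotate: ["car", "pedestrian"],
    }),
  });
  if (!res.ok) throw new Error(`Task creation failed: ${res.status}`);
  return res.json();
}
```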
Every one of these labeling companies, when I talk with them, is almost 100% manual (data augmentation or bootstrapped iterative manual checking aside), which of course makes sense right now.
My question is: how do you plan to eventually automate labeling? In the long run that's effectively an unsupervised system, which is still pretty much unsolved.
Do you use the customer-provided images and your labels to train your own inference models? That seems like an obvious part of any venture pitch in this space, but it could put you at odds with your customer base (companies like mine, for example, which generate a lot of visual data).
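To make concrete what I mean by bootstrapped iterative manual checking, here's a minimal sketch: a model pre-labels every item and humans only review the low-confidence ones. All names and the threshold here are hypothetical:

```typescript
// Sketch of model-assisted labeling with human review of low-confidence
// predictions. Verified labels could then retrain the model, and repeat.
interface PreLabel {
  label: string;
  confidence: number; // 0..1, from the model
}

type Model = (item: string) => PreLabel;                      // hypothetical
type HumanReview = (item: string, guess: PreLabel) => string; // hypothetical

function labelDataset(
  items: string[],
  model: Model,
  review: HumanReview,
  threshold = 0.95,
): Map<string, string> {
  const labels = new Map<string, string>();
  for (const item of items) {
    const guess = model(item);
    // Accept confident predictions; escalate the rest to a human.
    labels.set(
      item,
      guess.confidence >= threshold ? guess.label : review(item, guess),
    );
  }
  return labels;
}
```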
Anyway, the thing I was going to suggest was to support click-and-drag with the mouse for looking around, and touch-and-drag on mobile devices. It would also help to provide on-screen buttons so mobile users can move forward, backward, and sideways.
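Roughly what I have in mind, sketched with Pointer Events so one handler covers both mouse and touch; setCameraAngles is a stand-in for whatever camera hook the viewer actually has:

```typescript
// Drag-to-look controls via Pointer Events (mouse and touch).
// Hypothetical camera hook; a real viewer would rotate its camera here.
function setCameraAngles(yawRad: number, pitchRad: number): void {
  console.log(`yaw=${yawRad.toFixed(3)} pitch=${pitchRad.toFixed(3)}`);
}

const SENSITIVITY = 0.005; // radians per pixel of drag
let yaw = 0;
let pitch = 0;
let dragging = false;
let lastX = 0;
let lastY = 0;

const canvas = document.querySelector("canvas")!;
canvas.addEventListener("pointerdown", (e) => {
  dragging = true;
  lastX = e.clientX;
  lastY = e.clientY;
  canvas.setPointerCapture(e.pointerId); // keep receiving moves off-canvas
});
canvas.addEventListener("pointermove", (e) => {
  if (!dragging) return;
  yaw -= (e.clientX - lastX) * SENSITIVITY;
  pitch -= (e.clientY - lastY) * SENSITIVITY;
  // Clamp pitch so the camera can't flip over the poles.
  pitch = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, pitch));
  lastX = e.clientX;
  lastY = e.clientY;
  setCameraAngles(yaw, pitch);
});
canvas.addEventListener("pointerup", () => (dragging = false));
```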
Ah indeed so you do :)
I tried it just now and, like you said, it works reasonably well. The pan joystick is a bit sensitive on an iPhone 7 Plus, in both portrait and landscape mode IMO.
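One possible way to tame it, sketched below: add a dead zone plus a nonlinear response curve so small thumb deflections pan slowly. The function name and constants are hypothetical, and I don't know the viewer's actual input pipeline:

```typescript
// Shape raw joystick input (-1..1) with a dead zone and a power curve.
function shapeJoystickInput(raw: number, deadZone = 0.1, exponent = 2): number {
  const sign = Math.sign(raw);
  const magnitude = Math.abs(raw);
  if (magnitude < deadZone) return 0; // ignore tiny accidental movements
  // Rescale to 0..1 past the dead zone, then apply the curve.
  const scaled = (magnitude - deadZone) / (1 - deadZone);
  return sign * Math.pow(scaled, exponent);
}

// A half deflection (0.5) now maps to roughly 0.2 instead of 0.5:
console.log(shapeJoystickInput(0.5)); // ≈ 0.198
```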
1. Does pushing an API for LIDAR mean that ScaleAPI is going to focus more on self-driving tech?
2. How do you see your tech in comparison to what comma.ai is doing?
3. What is the minimum resolution of LIDAR data needed to make meaningful annotations?