Show HN: Scale 3D, API for 3D labeling of LIDAR, camera, and radar data (scaleapi.com)
85 points by ayw 12 months ago | 16 comments

Hey everyone! I'm Alex, CEO and co-founder of Scale. One of the biggest bottlenecks in perception and vision development for robotics and self-driving companies has been labeling 3D data. Being able to label LIDAR, camera, and radar data together has massively accelerated our customers' timelines.

We've worked with a number of self-driving companies like GM Cruise, nuTonomy, Voyage, Embark, and more to build high-quality training datasets quickly, leveraging our API. Scale is the perfect platform for this work: we're focused on really high-quality data produced by humans via API.

There are quite a few labeling companies out there now, and it seems like you're differentiated on accepting pointclouds, which is great.

Every one of these labeling companies that I've talked with is almost 100% manual (data augmentation or bootstrapped iterative manual checking aside) - which of course makes sense right now.

My question is: how do you plan to eventually automate labeling? In the long run that's effectively an unsupervised system, which of course is still largely unsolved.

Do you use the customer-provided images and your labels to build your own inference models? That seems like an obvious part of any venture pitch in this space, but it could put you at odds with your customer base (companies like mine, for example, which generate a lot of visual data).

hey alex, your link is not working for me!

Hey — what browser / device are you using?

chrome / desktop

In the first demo you use WASD to move around. I personally use a very different keyboard layout, but that's not what I was going to say. While we're on the topic, though: some countries use AZERTY [0], so more people are affected by the particular choice of WASD than just those of us who've chosen non-standard layouts like Dvorak or Colemak. So maybe consider letting users define their own keys after all, even though I wasn't going to suggest that.

Anyway, the thing I was actually going to suggest: allow click-and-drag with the mouse to look around, and touch-and-drag on mobile devices. Also provide on-screen buttons so mobile users can move forward, backward, and sideways.

[0]: https://en.wikipedia.org/wiki/AZERTY
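For what it's worth, one common way web tools sidestep the layout problem (just a sketch of the general technique, not anything I know about Scale's code) is to bind to physical key positions via `KeyboardEvent.code` rather than the typed character via `event.key`, so WASD "just works" as ZQSD on AZERTY, with a user-remap table layered on top:

```javascript
// Sketch: map physical key positions (KeyboardEvent.code values) to
// movement actions. Because `code` identifies the physical key, the
// same bindings hold on AZERTY, Dvorak, Colemak, etc.
const defaultBindings = {
  KeyW: "forward",
  KeyS: "backward",
  KeyA: "left",
  KeyD: "right",
};

// Resolve a key event's code to an action, preferring user overrides.
function actionForKey(code, userBindings = {}) {
  return userBindings[code] ?? defaultBindings[code] ?? null;
}

// Wiring it up would look like:
//   window.addEventListener("keydown", (e) => {
//     const action = actionForKey(e.code, loadUserBindings());
//     if (action) move(action);
//   });
```

Offering the remap table then covers everyone who wants something else entirely.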

Thanks for the suggestion! If you try using it on mobile, we actually have joysticks for you to use which might work reasonably well. I'd try it out.

> we actually have joysticks for you to use which might work reasonably well. I'd try it out.

Ah indeed so you do :)

I tried it now and, like you said, it works reasonably well. The pan joystick is a bit too sensitive on an iPhone 7 Plus, in both portrait and landscape mode, IMO.

Funny you mention that: on an X Compact / stock Oreo / Firefox the whole page is unusable for me. Okay, it may be the browser.

Hi @Alex. I love the work you're doing! A few questions:

1. Does pushing an API for LIDAR mean that ScaleAPI is going to focus more on self-driving tech?

2. How do you see your tech in comparison to what comma.ai is doing?

3. What is the minimum resolution of LIDAR data needed to make meaningful annotations?

Very cool! Have you considered applying this technique to labeling Building Information Modeling (BIM) point-cloud data? One of the challenges when dealing with as-built BIM capture is understanding exactly how point clouds map to real-world features.

Thanks for the tip. We'll definitely explore that data type!

How do humans generate these labels? Not from this field and I am curious about the UI aspect of it. It's not like you can give each of your min-wage labelers a VR headset/controller to draw bounding boxes with.
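I assume the raw geometry is the easy part; e.g., deriving an axis-aligned 3D box from a handful of labeler-selected points is just a per-axis min/max (a sketch of what I mean, not anything Scale-specific):

```javascript
// Sketch: given LIDAR points an annotator has selected (each an
// [x, y, z] array), compute the axis-aligned 3D bounding box as a
// pair of min/max corners.
function boundingBox(points) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const p of points) {
    for (let i = 0; i < 3; i++) {
      min[i] = Math.min(min[i], p[i]);
      max[i] = Math.max(max[i], p[i]);
    }
  }
  return { min, max };
}
```

The hard part is presumably the UI for selecting the right points in a dense cloud in the first place.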

That's part of the secret sauce :) But we work very hard to find ways to allow our Scalers to label incredibly complex data.

Wait... I can't be the only one thinking "these are humans?"

It's amazing what humans can do :)

