
Show HN: Scale 3D, API for 3D labeling of LIDAR, camera, and radar data - ayw
https://www.scaleapi.com/sensor-fusion-annotation#9ad1290
======
ayw
Hey everyone! I'm Alex, CEO and co-founder of Scale. One of the biggest
bottlenecks in perception and vision development for robotics and self-
driving companies has been labeling 3D data. Labeling LIDAR, camera, and
radar data together has massively accelerated our customers' timelines.

We've worked with a number of self-driving companies, including GM Cruise,
nuTonomy, Voyage, and Embark, to build high-quality training datasets quickly
using our API. Scale is the perfect platform for this work: we're focused on
really high-quality data produced by humans via API.
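For readers curious what "via API" looks like in practice, here is a minimal sketch of building a request body for a 3D cuboid-labeling task. The field names, task-type string, and endpoint in the comment are illustrative assumptions for this sketch, not Scale's documented schema:

```python
import json

def build_lidar_task(scene_url: str, labels: list) -> dict:
    """Assemble a hypothetical request body asking humans to draw
    3D cuboids around the given object classes in a LIDAR scene."""
    return {
        "task_type": "lidarannotation",   # assumed task-type name
        "attachment": scene_url,          # pointer to the scene data
        "attachment_type": "json",
        "labels": labels,                 # object classes to box
    }

payload = build_lidar_task(
    "https://example.com/scenes/frame_000.json",
    ["car", "pedestrian", "cyclist"],
)
print(json.dumps(payload, indent=2))

# Submitting it would then look roughly like (API key required):
#   requests.post("https://api.scale.com/v1/task/lidarannotation",
#                 auth=(API_KEY, ""), json=payload)
```

The labeled cuboids would come back asynchronously once human labelers finish the task, typically via a callback or a polling endpoint.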

~~~
lwhite726
hey alex, your link is not working for me!

~~~
ayw
Hey — what browser / device are you using?

~~~
lwhite726
chrome / desktop

------
bringtheaction
In the first demo you use WASD to move around. Personally I use a very
different keyboard layout, but that's not what I was going to say. Actually,
while we're on the topic, it should be noted that in some countries they use
AZERTY [0], which means that more people are affected by the particular
choice of WASD than just those of us who have chosen non-standard layouts
like Dvorak or Colemak. So maybe consider letting the user define which keys
to use after all, even though I wasn't going to suggest that.

Anyway, the thing I was going to suggest was to allow click-and-drag with
mouse to be used in order to look around. Also touch-and-drag on mobile
devices. And provide on-screen buttons for mobile device users to move
forwards and backwards and sideways.

[0]: https://en.wikipedia.org/wiki/AZERTY

~~~
ayw
Thanks for the suggestion! If you try it on mobile, we actually have
on-screen joysticks, which might work reasonably well. Give it a try.

~~~
bringtheaction
> we actually have joysticks for you to use which might work reasonably well.
> I'd try it out.

Ah indeed so you do :)

I tried it now and like you said it works reasonably well. The pan-joystick is
a bit sensitive on iPhone 7 Plus both in portrait and landscape mode IMO.

------
isawczuk
Hi @Alex. I love the work you are doing! A few questions:

1\. Does pushing an API for LIDAR mean that Scale is going to focus more on
self-driving tech?

2\. How do you see your tech in comparison to what comma.ai is doing?

3\. What is the minimum resolution of LIDAR data needed to produce meaningful
annotations?

------
mhb_eng
Very cool! Have you considered applying this technique to labeling Building
Information Modeling (BIM) point-cloud data? One of the challenges when
dealing with as-built BIM capture is understanding exactly how point clouds
map to real-world features.

~~~
ayw
Thanks for the tip. We'll definitely explore that data type!

------
zawerf
How do humans generate these labels? Not from this field and I am curious
about the UI aspect of it. It's not like you can give each of your min-wage
labelers a VR headset/controller to draw bounding boxes with.

~~~
ayw
That's part of the secret sauce :) But we work very hard to find ways to allow
our Scalers to label incredibly complex data.

------
xemoka
Wait... I can't be the only one thinking "these are humans?"

~~~
ayw
It's amazing what humans can do :)

