
Do the math. 7 cameras at 1280x960 with 16 bpp at maybe an average 30 FPS and you're talking ~500 MiB/s of raw image data you would need to record. Where would it go? Not onto any sort of onboard storage, and not over the wire to Tesla, that much is clear.
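A quick back-of-the-envelope check of that figure, using the numbers from the comment (7 cameras, 1280x960, 16 bpp, ~30 FPS average):

```python
# Sanity-check the ~500 MiB/s raw-bandwidth claim.
cameras = 7
width, height = 1280, 960
bits_per_pixel = 16
fps = 30

bytes_per_frame = width * height * bits_per_pixel // 8   # 2,457,600 bytes
bytes_per_second = cameras * bytes_per_frame * fps
mib_per_second = bytes_per_second / 2**20

print(f"{mib_per_second:.0f} MiB/s")  # ~492 MiB/s, i.e. roughly 500
```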

What curious people have found is that the car just sends a basic disengage report with maybe 10 s of h264 video, or raw images from the cameras spaced far apart in time.




Who the hell stores or sends raw image data? The training data used for offline training of the vision neural nets is probably all h264-encoded anyway.


Thanks.

You don't (always) need full-resolution 30 fps raw frames to train your model, though. If you are looking at a missed exit, stopping at a traffic light, etc., you need the front-facing cameras (so 3 of the 7) and only a few raw frames (and the sensor is 12 bits, not 16). One frame is 1.8 MB raw, and if you train on YUV images it would be even smaller. With a 200 MB in-memory buffer you can keep 100 frames at any time (and there is 8 GB of local storage for holding data while waiting to upload).
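The arithmetic behind those numbers, as a sketch under the comment's assumptions (1280x960 sensor, 12-bit raw frames, 200 MB buffer):

```python
# How big is one 12-bit raw frame, and how many fit in a 200 MB buffer?
width, height = 1280, 960
bits_per_pixel = 12                      # sensor bit depth per the comment

bytes_per_frame = width * height * bits_per_pixel // 8   # 1,843,200 bytes
mb_per_frame = bytes_per_frame / 1e6                     # ~1.8 MB, as stated

buffer_bytes = 200 * 10**6
frames_in_buffer = buffer_bytes // bytes_per_frame       # ~108 frames

print(f"{mb_per_frame:.2f} MB/frame, {frames_in_buffer} frames in buffer")
```

So a 200 MB buffer comfortably holds the "100 frames at any time" the comment describes.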

You said "any meaningful amount of data"; I guess this is subjective!



