
AI work is split into two phases: training and inference.

Training is the process of generating the parameters of a neural network. It usually takes significant resources and is a very hard process.

Inference is the process of using a pre-trained neural network to perform predictions.
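
To make the split concrete, here's a minimal PyTorch sketch with a toy model and fake data (purely illustrative, not anyone's actual stack): training iterates over data and updates parameters via gradients, while inference is a single forward pass with the parameters frozen.

    import torch
    import torch.nn as nn

    # Toy network standing in for a real model; sizes are arbitrary.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    # --- Training: iterate over data, compute gradients, update parameters ---
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))  # fake dataset
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # backprop: the expensive, data-center part
        opt.step()        # parameters change here

    # --- Inference: one forward pass, parameters frozen ---
    model.eval()
    with torch.no_grad():   # no gradients needed in the deployed system
        pred = model(torch.randn(1, 8)).argmax(dim=1)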

Tesla's cars have hardware to perform inference. Having worked in the field for several years, I do believe they have hardware capable of performing inference (driving a car).

However, the hardware that trains the neural network comprises hundreds, if not thousands or tens of thousands, of GPUs, and that hardware lives in Tesla's data centers (or those of some other cloud provider).

The really hard part of this project is training, not inference. So while there is some risk that the car (inference) hardware might require an upgrade, that seems very unlikely to me.




Thanks? I'm an ML researcher. I thought the comment about ViTs would have been a dead giveaway. Is this nerd-splaining? Because it is really demeaning, and you're missing the entire point.

While you're right that inference is not training, accuracy and inference speed are still important and hardware dependent. YOLO can still be made faster and more accurate. Everything I said still holds. You want to do inference on larger images because larger images carry more information, and more information gets you better accuracy. You want bigger networks because network size correlates strongly with accuracy. Read any object detection paper and you'll see different-sized models showing exactly these tradeoffs. And you want faster hardware because you still need your algorithms to run fast. Inference is still done in the car, and better hardware makes all of that perform better.
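
As a rough illustration of the image-size tradeoff, here's a sketch timing a stand-in backbone (a randomly initialized resnet18, not a YOLO model or anything Tesla uses) at different input resolutions. Larger inputs cost more compute per frame, which is exactly where faster inference hardware pays off.

    import time
    import torch
    from torchvision.models import resnet18

    model = resnet18().eval()  # stand-in backbone, randomly initialized

    for size in (320, 640, 1280):
        x = torch.randn(1, 3, size, size)
        with torch.no_grad():
            model(x)  # warm-up pass
            t0 = time.perf_counter()
            for _ in range(10):
                model(x)
            ms = (time.perf_counter() - t0) / 10 * 1000
        print(f"{size}x{size}: {ms:.1f} ms/frame")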

> The really hard part of this project is on training, not inference.

If you follow the YOLO series or other real-time object detection work, you'll know that inference is still something being actively worked on. Inference speed matters. Throughput (a different metric) matters. Hardware helps with both, even if we use the same pretrained network: a network will have faster inference and higher throughput on an Ampere card than on a Kepler. So I'm not sure what you're debating here; it really feels like you're trying to tell me you know machine learning while missing strong clues that you're talking to someone who already does.
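
To illustrate the latency/throughput distinction, a small sketch (again using resnet18 as a stand-in, on CPU by default): latency is the time for one frame, throughput is frames per second overall, and batching improves throughput without improving latency.

    import time
    import torch
    from torchvision.models import resnet18

    model = resnet18().eval()

    def seconds_per_call(batch, reps=5):
        x = torch.randn(batch, 3, 224, 224)
        with torch.no_grad():
            model(x)  # warm-up pass
            t0 = time.perf_counter()
            for _ in range(reps):
                model(x)
            return (time.perf_counter() - t0) / reps

    t1, t8 = seconds_per_call(1), seconds_per_call(8)
    print(f"latency   : {t1 * 1000:.1f} ms for one frame")
    print(f"throughput: {1 / t1:.0f} img/s at batch=1 vs {8 / t8:.0f} img/s at batch=8")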

TLDR: hardware still matters.


So much of the discussion seems focused on image recognition; how much of the full autonomous solution is the perception problem vs. other elements of driving?

From the outside I’ve wondered whether the missed schedule targets around autonomous driving are because we’ve conflated “solving” the perception problem with solving self-driving. Just curious what your take is on that.


> I do believe that they have hardware capable of performing inference (driving a car).

Since it has never been done, we still don't know what's needed. So even if you've worked in the field, that's only intuition, which is a bit light for big commercial claims IMHO.



