1. Incorrect. Comparing human performance on _recognition_ over ImageNet != comparing total human visual ability (object detection, which is a superset of recognition, plus tracking) to a machine. Machines are far worse at accurately saying what things are, where they are, and what they will do next (see the first sketch after this list). For example, cyclists move very quickly. Can your system not only see them but understand their intent with extremely high accuracy? And if your camera is even partially occluded, what failure mode do you have?
2. That's just wrong. If your image is washed out by a low sun angle, all 200 frames per second are washed out together, and you might mistake the red light for a green one. Frame rate is no defense against a common-mode failure (second sketch below).
3. To call yourself fault tolerant you need n+1 of each kind of detector, each resilient to different failure modes. Two cameras susceptible to the same problems aren't fully fault tolerant. Camera and radar have overlapping and complementary uses, but they don't overlap enough: e.g. if the radar fails, the camera can't do the same things the radar can (third sketch below).
4. This is unsubstantiated. Do you really think other self-driving-car companies aren't collecting data from cars on the road? What do you think all those cars you see in Mountain View and SF are for? And what evidence do you have that Tesla is "looking at all cases" -- do they even have the infrastructure necessary to simulate their cars' behavior over long-tail situations? (Hint: rumor is it's pretty lacking.)
5. The serious L4+ companies, e.g. Uber, are pretty good at driving highways. They're all working on the next phase of the problem.
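To make point 1 concrete, here is a minimal sketch in Python of the gap between what an ImageNet benchmark scores and what a driving stack has to produce. The types and fields are purely illustrative assumptions, not any real perception API:

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    """What an ImageNet-style benchmark scores: one label for the whole image."""
    label: str                                  # e.g. "bicycle"

@dataclass
class TrackedObject:
    """What driving needs per object: what it is, where it is, what it does next."""
    label: str                                  # what: "cyclist"
    box: tuple[float, float, float, float]      # where: x, y, w, h in the frame
    velocity: tuple[float, float]               # tracking: current motion estimate
    predicted_path: list[tuple[float, float]]   # intent: likely future positions
    confidence: float                           # must hold up under partial occlusion

def perceive(frame) -> list[TrackedObject]:
    """Stand-in for a detection + tracking stack: many objects per frame, not one label."""
    raise NotImplementedError
```

Recognition is just the first field of the second type; everything below it is the part machines are still bad at.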
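For point 2, a toy simulation of why high frame rate is not redundancy. The probabilities are made-up assumptions, not measured failure rates; the point is only the structure of the comparison:

```python
import random

P_GLARE = 0.01        # assumed: chance the sun washes out the lens in a given second
P_FRAME_NOISE = 0.01  # assumed: chance a single frame is independently bad
FRAMES = 200          # frames captured per second
TRIALS = 100_000

def second_blind_independent():
    # The whole second is blind only if every frame fails on its own: 0.01**200.
    return all(random.random() < P_FRAME_NOISE for _ in range(FRAMES))

def second_blind_common_mode():
    # Glare hits the lens itself, so every frame that second is washed out together.
    return random.random() < P_GLARE

ind = sum(second_blind_independent() for _ in range(TRIALS)) / TRIALS
cm  = sum(second_blind_common_mode() for _ in range(TRIALS)) / TRIALS
print(f"P(whole second blind), independent frame noise: {ind:.5f}")  # ~0
print(f"P(whole second blind), common-mode sun glare:   {cm:.5f}")   # ~0.01
```

200 fps only helps if frame failures are independent; sun glare makes them perfectly correlated, so the failure probability is the same as with one frame.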
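And for point 3, a sketch of the redundancy argument. The capability sets and failure modes are invented for illustration (e.g. how well a camera measures range is debatable); the structure is what matters:

```python
CAPABILITIES = {
    "camera": {"classify_objects", "read_traffic_lights", "measure_range"},
    "radar":  {"measure_range", "measure_velocity"},
}
FAILURE_MODES = {
    "sun_glare": {"camera"},           # knocks out every camera at once
    "heavy_fog": {"camera"},
    "rf_interference": {"radar"},
}
# Everything the driving task needs from perception in this toy model:
REQUIRED = {"classify_objects", "read_traffic_lights", "measure_range", "measure_velocity"}

def surviving_capabilities(suite, failure):
    """Capabilities left after a failure mode knocks out every sensor of that type."""
    caps = set()
    for sensor in suite:
        if sensor not in FAILURE_MODES[failure]:
            caps |= CAPABILITIES[sensor]
    return caps

for suite in (["camera", "camera"], ["camera", "radar"]):
    for failure in FAILURE_MODES:
        missing = REQUIRED - surviving_capabilities(suite, failure)
        status = "OK" if not missing else f"DEGRADED, missing {sorted(missing)}"
        print(f"{suite} under {failure}: {status}")
```

The two-camera suite fails every camera-killing mode outright, and camera+radar degrades in both directions: glare leaves the radar unable to read lights, and radar loss leaves the camera without direct velocity. That's why you need n+1 of each modality, not just more sensors.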
5. See, this is why I know you're not actually reading the papers and comparing their performance, because Uber is doing terribly on the highway and elsewhere. Raquel Urtasun already got her program shut down for 9 months over terrible performance. That has never happened to Tesla.
This is more due to regulatory games than performance. Teslas aren't regulated like L4+ autonomous vehicles, so when they kill people (which they have!) they don't get shut down.