The thing about Waymo is that I suspect they're running the same ML fraud that Tesla, and Silicon Valley in general, is running: overfitting on the 20% of situations that occur 80% of the time.
For Waymo itself, you can overfit on 100% of the situations that will be encountered; 49 square miles isn't that large. It's the real world outside that area where I'm concerned about its efficacy. I think if you put a Waymo in a small town that no Alphabet engineer has ever even heard of, you'll see it fail badly as well.
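To put toy numbers on the 80/20 point (every frequency and success rate below is invented for illustration, not measured from any fleet), a model that nails the common scenarios can post a great aggregate number while still failing the long tail:

    # Toy sketch of the head/tail problem. All numbers are made up.
    head = {f"head_{i}": 0.80 / 5 for i in range(5)}    # 5 scenarios, 80% of miles
    tail = {f"tail_{i}": 0.20 / 95 for i in range(95)}  # 95 scenarios, 20% of miles
    freq = {**head, **tail}

    def success_rate(scenario):
        # Assumed per-scenario success: near-perfect on the head,
        # mediocre on the rare tail scenarios.
        return 0.999 if scenario.startswith("head") else 0.70

    overall = sum(p * success_rate(s) for s, p in freq.items())
    tail_only = sum(p * success_rate(s) for s, p in tail.items()) / 0.20

    print(f"overall (exposure-weighted) success: {overall:.3f}")  # ~0.939
    print(f"success on tail scenarios alone:     {tail_only:.3f}")  # 0.700

The exposure-weighted number comes out around 94% even though the tail is stuck at 70%, which is exactly the kind of gap a geofenced deployment can hide.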
FSD is a reinforcement learning problem, and we have no good way of training non-simulation algorithms for that. And a real, dynamical driving environment can't be simulated accurately enough.
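As a toy illustration of that sim-to-real gap (the braking model, the constants, and the tune-by-grid-search below are all invented for this sketch, not anything a real system uses): tune a controller in a simulator whose dynamics are slightly wrong, then run the same policy against the "real" dynamics and watch it miss the target.

    import numpy as np

    def stopping_error(gain, drag, v0=20.0, dt=0.1, steps=300, target=50.0):
        """|distance travelled before stopping - target|, for a toy braking model."""
        x, v = 0.0, v0
        for _ in range(steps):
            a = -gain * v - drag * 0.01 * v * v  # proportional braking + quadratic drag
            v = max(v + a * dt, 0.0)
            x += v * dt
        return abs(x - target)

    SIM_DRAG, REAL_DRAG = 1.0, 1.6  # the simulator underestimates drag

    # "Train" in simulation: pick the braking gain that is optimal there.
    gains = np.linspace(0.05, 2.0, 400)
    tuned = min(gains, key=lambda g: stopping_error(g, SIM_DRAG))

    # The same policy, deployed against the real dynamics, misses the target.
    print(f"sim error:  {stopping_error(tuned, SIM_DRAG):.2f} m")
    print(f"real error: {stopping_error(tuned, REAL_DRAG):.2f} m")

A policy that is near-optimal in the simulator overshoots or undershoots as soon as the real dynamics deviate, and real traffic deviates in far richer ways than one drag coefficient.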
Waymo works in a remarkable range of situations. I took a Waymo in LA, and our route went through an awkward four-way intersection at the crest of a hill on a residential street. Another driver ran the stop when we had right of way, saw us, and then just stopped, completely blocking our side of the road. The Waymo backed up a couple of yards, slowly went around the wrong side, and proceeded on its route. That is, in a weird situation it did exactly what a good, cautious human driver would do. Small sample, but it makes me think they are not doing what you say; they are actually trying to approach the problem seriously rather than taking Tesla’s “full speed and damn the torpedoes” approach.
It's hilarious to see Tesla fans try to act like designing for an undefined operational domain is somehow extra brilliant and not one of the stupidest fucking ideas anyone has ever come up with.
San Francisco, Phoenix, and LA represent a strong diversity of driving conditions. Certainly not all driving conditions, but no one is throwing a Waymo into a small town the way you describe. Expanding slowly and cautiously seems like the rational thing to do; I’m not clear what you are proposing as an alternative (or specifically what the alleged fraud is).
> San Francisco, Phoenix and LA represent a strong diversity of driving conditions.
This could very well be true, but from the perspective of someone who lives in a rural area with real winters, for driving purposes those all look like pretty much equivalent large American cities without a winter.
Not sure where you're getting your information from.
My FSD (v13.2) has driven unmapped roads, including gravel roads, hills, narrow roads, and switchbacks, in the backwoods of Tennessee. Watching the display, I can see that it clearly identifies the road features and navigates them.