Humans also don’t have eight eyes facing every direction at all times, and they get drunk/tired/impatient/angry etc. The reality is the entire argument is silly: the two are very different, and the Musk/Karpathy argument is misrepresented here. Saying humans only use vision was a response to “it’s not possible with only vision”, not a statement that human vision is good enough and there’s no need to do better. The eight-camera surround is leaps better than human vision; where it falls short is in processing the signal, which the human brain does better. But if you have better inputs (we already do) and you believe you can one day match the processing part, you’ll one day get a much better result: one that’s suited to the vision-based roads we have now and scales to literally anywhere, not geographically constrained like Waymo.
Indeed, but humans also have an incentive to drive well, embodied by local traffic police and local laws, and even before passing their driving test they're made aware of the penalties for not driving well (which, let's remind ourselves, range from "mild ticking off"/"pay $$$" through "forfeit driving licence for a time" all the way to "forfeit liberty for a time")
Where are these incentives for self-driving algorithms?
If your algo breaks the law to a sufficient level, is someone (something?) prevented from driving for a time? Is that really going to be just that one vehicle, or should it be all vehicles with that same algo? If something really bad happens, who is charged; in the worst case, who might end up going to jail?
We all know CEOs tend to believe "this time it's different", that they're special, and that the annoying rulebook is to be viewed as guidance at best. VW/Martin Winterkorn, anyone?
> Where are these incentives for self-driving algorithms?
Surely the equivalent is the reward during training?
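(Purely as a made-up illustration of what "reward during training" could mean - none of these names or numbers come from any real driving stack - the incentive shows up as penalty terms baked into the reward the policy is optimised for:)

    # Hypothetical sketch: reward shaping for a simulated driving policy.
    # Penalty names and magnitudes are invented for illustration only.
    def step_reward(progress_m: float, collided: bool, violations: list[str]) -> float:
        """Reward for one simulation step: progress minus penalties."""
        reward = 0.01 * progress_m        # small positive reward for making progress
        if collided:
            reward -= 100.0               # a crash dominates everything else
        reward -= 5.0 * len(violations)   # e.g. ran a red light, crossed a solid line
        return reward

    # Example: drove 3 m, no crash, but ran a red light.
    print(step_reward(3.0, collided=False, violations=["ran_red_light"]))  # -4.97

In that framing the algorithm only "cares" about the law to the extent that whoever designed the reward made it care.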
> If your algo breaks the law to a sufficient level, is someone (something?) prevented from driving for a time? Is that really going to be just that one vehicle, or should it be all vehicles with that same algo? If something really bad happens, who is charged; in the worst case, who might end up going to jail?
Personal opinion:
The algorithm should learn from the fleet and be shared by the fleet; therefore all accidents should be treated like aircraft crashes and investigated extremely thoroughly, with the goal of eliminating the root cause.
If that cause was the CEO demanding corners be cut to boost shareholder value, then jail them; if it's that the algorithm had, say, never seen a flying shark drone[0] before, misclassified it as something it needed to take evasive manoeuvres to avoid, and that led to a crash, then perhaps not (except anything I suggest probably should be in their list of things to check for, so even then perhaps it would still be a CEO-at-fault example…)
> Surely the equivalent is the reward during training?
Surely the counter-example is when a self-driving vehicle drives straight into a stationary fire truck?[0]
If a human driver did this more than once (and lived to tell the tale!) - yet had no explanation other than "Of course I saw it, but I wasn't sure what it was and didn't realise I needed to avoid hitting it <shrug>" - wouldn't they lose their driving licence fairly quickly?
You asked for the incentives for AI; the equivalent isn't the same as for humans.
The nature of the AI doesn't include a concept of prison or licensing, so it can't be threatened with either, for the same reason I can't threaten a human driver with Af'nek-leigh D'Och entRah'negh.
I can however 'punish' (air-quotes necessary because it might not feel like anything) an AI by altering the weights and biases of its network — once done, it then thinks differently.
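For concreteness, here's a toy sketch of what that kind of 'punishment' looks like mechanically (one invented weight and a sigmoid, nothing to do with any real driving network): a gradient step that makes the offending output less probable.

    # Toy illustration only: "punishing" a network = nudging its weights so the
    # offending behaviour becomes less likely. Single invented weight, sigmoid output.
    import math

    w = 2.0  # higher w -> higher probability of the bad action

    def p_bad_action(w: float) -> float:
        return 1.0 / (1.0 + math.exp(-w))  # sigmoid

    lr = 1.0
    for _ in range(5):
        p = p_bad_action(w)
        grad = p * (1.0 - p)   # dp/dw for a sigmoid
        w -= lr * grad         # gradient step: reduce the bad action's probability

    print(p_bad_action(2.0), "->", p_bad_action(w))  # ~0.88 -> ~0.80

No threat is involved anywhere in that loop; the behaviour change is just new parameters.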
Don't anthropomorphise it, that's a category error.
Also, the field of "how does it even work?" (interpretability) is tiny, which is itself a reason not to grant these systems control of vehicles, but that's a separate issue.
> You asked for the incentives for AI; the equivalent isn't the same as for humans. The nature of the AI doesn't include a concept of prison or licensing, so it can't be threatened with it [..]
There certainly should be incentives for the humans creating an AI, though.
> Don't anthropomorphise it, that's a category error.
Volkswagen [human!] engineers created the illegal defeat devices in Dieselgate, under the supervision of their [human!] managers. The devices were illegal, and we punish the humans in charge when laws are broken, not the devices themselves. It should be the same with AI.
If this means software engineering becomes a field where you need mandatory liability insurance to work on AI, is that a bad thing?
In the glorious words of Stelios Haji-Ioannou, "If you think safety is expensive, try [having] an accident"
A camera that is actually better than the human eye is pretty difficult to find; they cost around $2,000 each, and even then you'll have worse peak resolution in the day and worse motion characteristics at night. Human eyes are pretty good!