Are you implying that a scenario where someone dies is strictly better if there was a human behind the wheel? Do you need a human to punish when an accident happens?
No, the implication is that there is an enormous number of edge cases that a human would respond to appropriately (i.e. no death) that the "A.I." cannot.
The rational part of my brain agrees, the parent in me does not, and I'm not sure either part is wrong.
Yes, I'm aware that a computer will statistically be a lot less accident-prone than I am (even though I'm completely accident-free, knock on wood). But even so, if my child died in an AI edge case where I know without a doubt I would have avoided the accident, it would matter a hell of a lot to me.
I'm not sure if it's right, but we will expect a lot more of AI drivers (and other AIs) than we do of ourselves. Failing to see a stopped service truck in the middle of the highway isn't something a human does. And any death resulting from that will be highly scrutinized.
I expect a lot more from AI. Years ago I could play Quake III Arena and get pwned by bots.
I expect modern AI to recognize every car, pedestrian, and physical object. I expect it to recognize all hostile targets and avoid them when feasible. I expect it to predict every possible move of both automated and manually controlled vehicles and pedestrians. My AI needs to have an anti-gang mode that will protect the occupants even if that means acting as a weapon against hostiles. After all, people may be sending kids to grandma alone in the car (you know they will).
I expect all of this to occur without the need for network connectivity. My vehicle is to be prohibited from communicating without my express permission, given only after I have reviewed the data it wishes to upload.
I would like to add: if vehicles depend entirely on their AI, then there should be at least 3 or 4 compute nodes and a voting system. Should a node fail or produce false information, the other nodes should override it.
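That's basically triple modular redundancy. A minimal sketch of the voting logic in Python, assuming three nodes that each independently compute a steering angle (the node outputs, the quantization step, and the quorum size are all made up for illustration):

    # Majority vote across redundant compute nodes (hypothetical sketch).
    from collections import Counter

    def vote(outputs, quantum=0.5):
        """Return the majority steering angle, or None if no quorum.

        Angles are quantized so nodes that agree within `quantum`
        degrees count as the same vote.
        """
        buckets = Counter(round(angle / quantum) for angle in outputs)
        value, count = buckets.most_common(1)[0]
        if count >= 2:       # at least 2 of 3 nodes must agree
            return value * quantum
        return None          # no quorum: enter a fail-safe state

    # Node 2 is faulty and reports garbage; the other two override it.
    print(vote([12.5, 87.0, 12.4]))  # -> 12.5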
I think I agree that we should expect more from AI drivers. But how much more?
Even if they managed to drastically cut the number of fatal accidents, I'm sure that a large portion of people would still be really irrational about it and that worries me a lot.
> I'm sure that a large portion of people would still be really irrational about it and that worries me a lot
I used to think that. The Tesla AutoPilot accidents have inspired no such reaction, though. The backlash may come more from a self-driving car being used by a terrorist or school shooter as a getaway vehicle than from someone getting mowed down by an errant algorithm; the latter is a cause of death we've been relatively numbed to.
Even if the cumulative number of fatal accidents is reduced and certain types of accidents happen less often, it is not optimal if different types of accidents happen more often, say uncontrollably accelerating into a wall. I worry that we will accept 'slightly better' as good enough, along with the consequences that come with it.
At some point, economics and actuaries come into play. It will be worrying when 'AI is safer than humans!' can be pointed to as a reason to not correct expensive, perhaps intractable, AI failures.
That’s all in a hypothetical future in which autonomous vehicles work as intended, are well tested, and are better than people. We’re still nowhere near that kind of level 5 automation, and yet we’re already testing on open roads.
Be careful of the utopia trap: “anything is acceptable now for the rosy future imagined.”
Absolutely, you're right. But as a singular entity, I'm a human, and an AI is "other". I know I perform better in 2 cases and AI statistically performs better in 100 others, but I believe I'm more special than the statistics in those 100 other cases, so I only care about the cases where I could have made a saving difference.
I guess I'm suggesting there's bound to be an (irrational) paradox where everyone wants the other drivers to be replaced by AI, so they themselves can drive safely and avoid the edge-case accidents.
I’m looking forward to having a self-driving car. You value control so much that getting years of your life back isn’t worth it? How many hours of your life have you spent, and will you spend, driving when you could have been surfing the web, talking to loved ones, reading, eating, or literally anything else?
The problem is not self-driving cars but unsafe self-driving cars, the "move fast and break things" kind. Self-driving AI is not ready yet; when it is ready and safe, I will accept it. I wish we'd get safety at the level of NASA or airplanes, not the safety we have in web apps and regular software.
It could easily be the other way around. Right now cars are murder machines. Your child could be hit and killed by a human driver where an AI would have avoided the accident.
Or a self-driving bus full of children could get confused by a weird sticker and jump off a bridge, or hit a giant truck stopped in the middle of the road.
We are speculating here; we need numbers to see if these self-driving cars are better. How do we get those numbers, though? Is it fair for the citizens of a city to be forced to be part of these tests?
Are we sure the correct number of incidents is reported from these tests? Are the tests representative of the real world (are they testing in all conditions, or avoiding some streets or some weather to keep the numbers looking good)?
They're testing in California. I.e. flat-ish, wide roads, relatively nice weather. Sure, it's ideal for baby steps, but passing this off as "look at our safety record, therefore it's safe anywhere" is disingenuous at the very least.
Humans cause accidents in certain cases, mostly by carelessness.
AI causes accidents in certain cases, mostly by its stupidity.
These are different cases they fail in. You shouldn't be comparing a driverless AI to an unaided human, but to a human aided by the driverless AI acting as a driving aid (you know, like automatic emergency braking). That way you get the best of both worlds: the attentiveness of the AI to dumb things, and the smartness of humans.
I am sure a driverless AI, even a good one, won't beat a human aided by that same driverless AI acting as an "autopilot/supervisor".
This would mean that car companies won't invest in improving as long as they have fewer accidents than some threshold; maybe they can put some money into lobbying so that threshold is the largest number obtainable with some statistical gymnastics.
The number of car crashes is not the same in all countries, states, or regions, so I would hate for a safe city to become less safe in the future because of this.
> This would mean that car companies won't invest in improving as long as they have fewer accidents than some threshold; maybe they can put some money into lobbying so that threshold is the largest number obtainable with some statistical gymnastics.
I really really doubt that! The car manufacturers will fight tooth and nail to be seen as the company providing the safest cars
Them competing to be the safest means that none of them are as safe as they could be working together, though.
If every company has their own secret suite of test cases then different companies can specialize in different aspects of safety, and different AIs will be tuned to watch for different conditions.
Imagine if instead of that they all worked together to define a rigorous test suite. Then they would all be striving to excel against all of the tests that the best of them could come up with. Wouldn't that result in more rigorous testing than any individual company would do? Especially if the results of all of the tests were public?
To go another step further, imagine an open carAI platform that had the aforementioned test suite and a full simulation platform for testing changes, with the different car manufacturers represented on a committee that oversees the carAI platform. Separate the smarts from the base car a bit and have some sort of abstraction layer between the smart bits and the car bits. As long as the abstraction layer is configured properly, different AIs would be interchangeable/upgradeable on the same base hardware. All car companies (and tech companies, and interested individuals) could collaborate on building the best, most efficient, safest car AI possible.

People on older hardware would get all the same safety improvements as people on newer hardware (though hardware improvements would obviously improve things like sensor quality and quantity), and there wouldn't be fragmentation between AI ecosystems, with poorer people trapped on older releases with lower safety standards while the rich get the latest, greatest, and safest cars.
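To make the abstraction layer idea concrete, here's a minimal sketch in Python, assuming a shared interface that any vendor's AI package would have to implement (all the names here, DrivingAI, SensorFrame, ControlCommand, are hypothetical):

    # Hypothetical abstraction layer between the AI and the base car.
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class SensorFrame:
        camera: bytes      # raw sensor data exposed by the base car
        lidar: bytes
        speed_mps: float

    @dataclass
    class ControlCommand:
        steering_deg: float
        throttle: float    # 0.0 .. 1.0
        brake: float       # 0.0 .. 1.0

    class DrivingAI(Protocol):
        """Interface every swappable AI package must satisfy."""
        def decide(self, frame: SensorFrame) -> ControlCommand: ...

    def control_loop(ai: DrivingAI, read_sensors, apply_command):
        # The base car runs this loop; the AI behind the interface
        # is interchangeable/upgradeable without touching the car bits.
        while True:
            frame = read_sensors()
            apply_command(ai.decide(frame))

Any conforming AI, from any manufacturer or from the open platform, could then be dropped onto the same base hardware.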
Obviously competition is better than nothing, but is it really better than an open, collaborative alternative?
I agree. I also think this self-driving component should be standardized (maybe around more than one standard), but when you buy the car you should be able to decide whether you want the AI at all, and if you do, to choose the AI package from company X, Y, or Z.
Maybe making the component open source would be the best for citizens.
Yes, competition is better than collaboration. Collaboration gets bogged down in committees, with each stakeholder trying to protect their turf. Competition leads to improvements as companies try to find an edge over their competitors.
>I really really doubt that! The car manufacturers will fight tooth and nail to be seen as the company providing the safest cars
We've seen competition not working in many sectors, like ISPs in the US, or operating systems for desktops and mobile phones...
If the self-driving component in the car could be swapped, so you could buy, say, a Tesla but get the self-driving package from BMW or from Google, then that could help competition. Without this, if all companies have a similar failure rate, they can concentrate on competing on marketing, horsepower, price, and efficiency rather than adding extra safety, at least until some important person or a large number of children get killed by a very stupid issue, like a car hitting a wall it did not see because it got confused by some drawings on the wall.
I don't think car manufacturing is one of those sectors. Even if you take into account the periodic scandals that pop up (emissions tests, having to recall cars because they were literally killing people[0], etc.), cars have gotten massively safer over the past 50 years.
The market doesn't bear this out with current cars, so why should autonomous cars be any different? Cars are marketed as powerful, sexy vehicles that imbue you with the strength to traverse the harshest conditions in the worst weather. Macho vehicles. The only carmaker that seems to have a sterling reputation for safety is Volvo, and they're a niche manufacturer.
On a larger utilitarian scale, sure, but once a driverless car kills a person, many legal and ethical issues will arise that you can't just brush off with that argument.
I am considering the case where a company would refuse to upgrade the sensors (or the software) if that would cost, say, 25% more than paying for the people killed.
I think the point is that the car would not have been in traffic otherwise, thus avoiding any death.
Though it does lead to interesting questions about whether the deaths that may occur now are acceptable if traffic deaths are reduced in the long term as a result of the testing.