So it's not just telling drivers "don't drink and drive" while knowing they'll probably reoffend soon anyway. Every crash, and especially every fatality, can be thoroughly investigated and should be prevented from ever happening again.
Hopefully there's enough data in the investigation so that Tesla / Waymo and all other car companies can include the circumstances of the failure in their tests.
Whoever owns the algorithm. Or at least whoever's name the license/permission was issued in. If it's an organization, the top management who signed off on this has to take the blame.
>> It's physically impossible for a human to intervene on the timescales involved in motor accidents.
That remains true even without any automatic driving tech: you are responsible even for accidents which happen too quickly for anyone to intervene. Obviously if you have some evidence (dashcam footage) showing that you couldn't avoid the accident, you should be found not guilty, but the person going to court will be you, not the maker of your car's cruise control/radar system/whatever.
The assists are certainly helping more than anything, so I feel that the Mazda is much safer to drive in heavy traffic than the older Outback.
The cruise control has autonomy over speed and braking only, but it is still autonomy. Of course, since my hands never leave the wheel, it may not fit with what you have in mind.
Having said that, Mazda (or Bosch?) really nailed their radar; it has never failed to pick up motorbike riders, even though the manual warns us not to expect it to.
I feel more confident in a system where the ambition is smaller, yet execution more solid.
FWIW I also tested the AEB against cardboard boxes, driving into them at 30 km/h without moving off the accelerator at all, and came away very impressed by the system. It intervened so last-second I felt sure it wasn't going to work, but it did: the first time there was a very slight impact, the next two were complete stops with small margins.
This stuff is guaranteed to save lives and prevent costly crashes (I generally refuse to use the word "accident") on a grander scale.
I do love that even though they have a ton of driver alerting features, hands have to be on the wheel at all times.
Either you have full autonomy without hands or you don't. There is no middle ground; anything in between is a recipe for disaster.
There are several states in the USA that are more progressive than others (namely CA). But with many working groups in and around the legal side, it hopefully will be a thing of the past.
In Australia, they are mandating by some year soon (I don't have it on hand) that to achieve a 5-star safety rating, some level of automation needs to exist; things like lane departure warning or ABS will become as standard as aircon.
Let me put it this way - my Mercedes has an emergency stop feature when it detects pedestrians in front of the car. If I'm on cruise control and the car hits someone, could I possibly blame it on Mercedes? Of course not. I'm still the driver behind the wheel and those systems are meant to help - not replace my attention.
What we have now in these semi-autonomous vehicles is nothing more than a glorified cruise control, and I don't think the law treats it any differently (at least not yet).
Now, if Uber(or anyone else) builds cars with no driver at all - sure, we can start talking about shifting the responsibility to the corporation. But for now, the driver is behind the wheel for a reason.
The San Francisco Chronicle late Monday reported that Tempe Police Chief Sylvia Moir said that from viewing videos taken from the vehicle “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway." (bit.ly/2IADRUF)
Moir told the Chronicle, “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident,” but she did not rule out that charges could be filed against the operator in the Uber vehicle, the paper reported.
On top of the speedometer it has the GPS speed to compare against as well; I can't see how there is any excuse for being over the limit.
The stats quoted in UK advertising were that at 40 mph, 80% of pedestrians hit will die from the crash; at 30 mph, 20% will die.
Had the car been doing just under the limit, e.g. 33 mph, then there's a much better chance that the woman would have survived.
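A rough back-of-the-envelope sketch of why a few mph matter so much (assuming only that impact energy scales with the square of speed; the fatality figures are the ones quoted above, not mine):

```python
# Impact energy scales with v^2, so a small speed reduction removes a
# disproportionate amount of energy from a collision (mass cancels out).

def energy_ratio(v1_mph: float, v2_mph: float) -> float:
    """How much more kinetic energy a car carries at v1 than at v2."""
    return (v1_mph / v2_mph) ** 2

print(energy_ratio(38, 33))  # ~1.33: ~33% more energy at 38 mph than 33 mph
print(energy_ratio(40, 30))  # ~1.78, in line with the 80% vs 20% fatality
                             # rates quoted from the UK adverts
```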
Can you explain why you think this rule changed a decade ago?
Because sometimes that 10% is argued to be a margin of error for humans who supposedly aren't paying attention to how fast they're going; but if that's the case, then there's really no reason why the robot shouldn't drive strictly under the speed limit.
If you explicitly programmed a fleet of robots to deliberately break the law, then I think a fine for the first robot that gets caught breaking that law isn't consequence enough, while the programmers adjust the fleet's code to not get caught again.
Consequences should be more severe if there's a whole fleet of robots programmed to break the law, even if the law catches the first robot right away and the rest of the fleet is paused immediately.
> Chief of Police Sylvia Moir told the San Francisco Chronicle on Monday that video footage taken from cameras equipped to the autonomous Volvo SUV potentially shift the blame to the victim herself, 49-year-old Elaine Herzberg, rather than the vehicle.
> “It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.
Based on the layout of the presumed crash site (namely, the median had a paved section that would effectively make this an unmarked crosswalk), and on the fact that the damage was all on the passenger's side (which is to say, the pedestrian would have had to cross most of the lane before being struck), I would expect that there is rather a lot that could have been done on the driver's side (whether human or autonomous) to avoid the crash.
> Herzberg is said to have abruptly walked from a center median into a lane with traffic
So that explains that. However, contrary to the thrust of your argument, the experience of the sober driver, who was ready to intervene if needed, is hard to dismiss:
> “The driver said it was like a flash, the person walked out in front of them,” Moir said. “His first alert to the collision was the sound of the collision.”
No human could brake that well, and simply jamming the brakes would engage the ABS, leading to a longer stopping distance. Not to mention the human reaction time of 0.5-0.75 seconds would have prevented most people from even lifting their foot off the accelerator pedal before the collision, even if they were perfectly focused on driving.
I was taught that the entire point of ABS is so that you can just jam the brake and have the shortest stopping time instead of modulating it yourself to avoid skidding. Do you have any source to the contrary?
Older dumb ABS systems would simply "pump the pedal" for the driver, and would increase stopping distance in almost all conditions, especially single-channel systems. Newer systems determine braking performance via the conventional ABS sensors plus accelerometers. These systems will back off the braking force, then increase it again, bisecting between the known-locked and known-unlocked conditions to find the optimum. These systems _will_ stop the car in the minimum distance possible, but very few cars use them.
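A minimal sketch of that bisection idea (not real ABS firmware; `wheel_locked` is a hypothetical sensor check and the pressure units are arbitrary):

```python
# Hunt for the maximum brake pressure that does not lock the wheel by
# bisecting between known-unlocked (lo) and known-locked (hi) pressures.

def optimal_brake_pressure(wheel_locked, lo=0.0, hi=1.0, iterations=12):
    """wheel_locked(pressure) -> True if the wheel locks at that pressure."""
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if wheel_locked(mid):
            hi = mid   # locked: back off
        else:
            lo = mid   # still gripping: push harder
    return lo          # highest pressure known not to lock the wheel

# Fake sensor: the wheel locks above 0.73 pressure units.
print(optimal_brake_pressure(lambda p: p > 0.73))  # ~0.73
```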
Wikipedia backs me up but adds that it also decreases stopping distance on dry and slippery surfaces, while significantly increasing stopping distances in snow and gravel. I’m from a country with a lot of snow so that makes sense.
In terms of split second reactions, it's pretty much optimal still to just jam the brakes if you have ABS. It's much better than braking too little, which is what most non-ABS drivers would have done.
That said, the point of ABS is that in the rare event you have to brake at full power, the system automatically helps you do it near-optimally (slightly skidding) without additional input, and you retain full steering ability.
If you don't have ABS you'd need to train that emergency stop ability on a daily basis to even come close.
Many cars, such as my POS Ford Focus, use a single-channel ABS system. These systems will oscillate all four brakes even if only one wheel is locked. Combined with the rear drum brakes, the ABS considerably increased stopping distances on dry roads.
Maybe it's society's fault, for building open-access roadways where vehicles exceed a few km/h.
As others have said in the comments, the whole point of having the technology is defeated if it performs worse than humans. Assuming vehicles were parked along the road, a sane human driver would evaluate the possibility of someone suddenly coming out from between them and would not drive at 40 mph.
If that's the case, most drivers on the road are very far from "sane drivers." I've been illegally passed on narrow residential streets many times, because I was going a speed that took into account the fact that someone might jump out from between parked cars.
> she came from the shadows right into the roadway
Also, we were told radar would have solved exactly this limitation of humans.
> Uber car was driving at 38 mph in a 35 mph zone
Also, we were told these cars would be inherently safer because they would always respect limits and signage.
> she is said to have abruptly walked from a center median into a lane with traffic
I don't know about other drivers, but when someone is on the median or close to the road I usually slow down on principle, because it doesn't match the usual expectations of a typical "safe" situation.
I've been advocating against public testing for a long time, because it just treats people's safety as an externality. Uber is cutting corners; not all companies are that sloppy, but this is, overall, unacceptable.
This means that there's a fixed minimum distance in which even the optimal driver can stop a car doing x mph. Yes, an autonomous vehicle has a faster reaction time to begin the stop, but no matter the reaction time, a stop cannot be instantaneous from any substantial speed.
If it takes 20 feet to stop a car doing 20 mph, it will take 80 feet to stop a car doing 40 mph. If there's a human between the initial brake point and 80 feet from it, that human will be hit, no matter who or what the driver is.
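A quick sketch of that v² relationship (assuming constant deceleration; the 7 m/s² figure is a typical hard stop on dry asphalt, and the reaction time is illustrative):

```python
# Braking distance under constant deceleration: d = v^2 / (2 * a).
# Doubling speed quadruples the distance needed to stop.

MPH_TO_MPS = 0.44704

def braking_distance_m(speed_mph: float, decel_mps2: float = 7.0) -> float:
    v = speed_mph * MPH_TO_MPS
    return v ** 2 / (2 * decel_mps2)

def stopping_distance_m(speed_mph, reaction_s=0.6, decel_mps2=7.0):
    """Braking distance plus distance covered during the reaction time."""
    v = speed_mph * MPH_TO_MPS
    return v * reaction_s + braking_distance_m(speed_mph, decel_mps2)

print(braking_distance_m(40) / braking_distance_m(20))  # 4.0: v^2 scaling
print(stopping_distance_m(38))   # ~31 m (~101 ft) for a human at 38 mph
```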
Or maybe it couldn’t, but then the whole « narrative « of the experiment is in serious jeopardy.
Not really. Self-driving cars are supposed to be better than the average human driver. That does not imply that they NEVER make mistakes.
I do not know the specifics of this case, but a general comment: if somebody is hiding behind a bush and (deliberately or by mistake) runs in front of the car, there is no way the car can anticipate that. There is no way to avoid accidents in 100% of cases.
I can think of many situations where I have avoided hitting pedestrians because of my awareness of the situation. E.g.: a pedestrian with earphones, looking at their phone, crossing against a red light just because the left-turning vehicle in the left lane stopped for a red arrow while I had green going straight; the pedestrian was mostly behind that car, just seen through its windows.
Or a pedestrian behind high snow walls, heading towards an ordinary pedestrian crossing with no lights, almost completely hidden by the snow walls and by a bus parked at a bus stop 50 m from the crossing, on a 50 km/h road. Since I had seen the pedestrian from far away already, I knew someone would show up there by the time I arrived. Then again, I would never pass a bus like that at high speed anyway; pedestrians like to just run across in front of the bus. And high snow walls next to a crossing are a big red flag too.
I live in Sweden though, where pedestrians are supposed to be first-class citizens who have no armor.
This looks like the exact situation the self-driving cars are supposed to be able to avoid: a big object in the middle of the street. I expect the car to try to avoid this even though the bike didn't seem to have any reflectors. If the LiDAR doesn't catch this, I don't think they should be out in traffic at all.
Yes, but this is a 4-lane roadway. I can totally imagine driving cautiously and slowing down near residential areas where houses are close to the road. However, this seems like a different case.
It, or the driver, could do more than just stop, though. You can change direction, even at 38 mph.
Then we have to get into other questions, would I as a driver willingly sideswipe a car next to me to avoid hitting a pedestrian? Is it reasonable to expect an AI to make the same value decision?
An algorithm, however, is a different deal: what should happen is decided in advance, so in some way it's already settled who gets killed.
You can't avoid what is hiding.
That would increase the chance a particular vehicle could avoid an accident which it couldn't, on its own, anticipate.
It would be much safer if all roadways with speed limits over a few km/h were fenced, with tunnels or bridges for pedestrian crossing. Arguably, we would have that now, but for legislative efforts by vehicle manufacturers many decades ago. Maybe we'll get there with The Boring Company.
"Most people" (which is, in reality, "most of my geek friends with high disposable income") shifts to "everyone" by the end of sentence. Also, my devices seem to know where I am...within a few city blocks: I do not like your requirement of always-on mandatory tracking, both from privacy and battery life POVs.
Even worse, this has major false negatives: it's not a probation tracker device. If I leave it at home, am I now fair game for AVs? And even if I have it with me and fine position is requested, rarely do I get beyond "close to the corner of X and Y Street"; usually the precision tops out at tens of feet, worse than useless for real-time traffic detection.
Moreover, your proposal for car-only roadways is only reasonable for high-speed, separated ways; I sure hope you are not proposing fencing off streets (as would be the case here: 50 km/h > "a few km/h", this is the usual city speed limit).
Some do argue that speed limits for unfenced roadways within cities ought to be 30 km/h or less. And although fatalities are uncommon at 30 km/h, severe injuries aren't. I live in a community where the speed limit is half that. But there's no easy solution, given how prevalent motor vehicles have become. Except perhaps self-driving ones.
Accidents can still occur even if the car is perfect.
Not saying this is the case here, but it could be. As others have said, we need to wait until we know more before jumping to conclusions.
Thinking about it seriously, maybe it shouldn't. Also, these things could lead to a pile-up with other vehicles coming in from behind.
Also, my problem with this is that a human death is functionally treated as an edge case missing a unit test, a way of progressing the testing of the code... and that really bothers me somehow. We need to avoid treating deaths as progress in the pursuit of better things.
Au contraire. Go read building codes some time. There's a saying that they're "written in blood" - every bit, no matter how obvious or arbitrary seeming, was earned through some real-world failure.
The death itself isn't progress, of course. But we owe it to the person who died to learn from what happened.
Or, you know, don't jump on comments for their not explicitly addressing the hobby horse you're riding. Frankly I just wanted to express a better engineering context around the loss of life without getting into the political bullshit for once.
This implies they're using the same systems and underlying models. If one model hit a pedestrian because of a weakness in the training data plus a sub-optimal hyperparameter, and therefore classified a pedestrian in that specific pose as some trash on the street, how do you share that conclusion with other companies' models?
I don't think all self-driving cars have the same sensors.
If you have LIDAR model X, and they were using LIDAR model Y, will your system "magically figure it out"?
If your car has cameras at 5 ft high, and the data is from a camera at 6 ft high?
Sure, someone could release the data, but will it screw up your models more than it fixes them?
(I totally agree the data should be released, I'm just not sure other self-driving cars will directly benefit. Certainly they can indirectly benefit from it.)
The idea you present is possible, but I have to wonder how viable it makes the idea of self driving cars.
It may; it also might not. If sharing data fails, further methods would be needed, but it's a good start for figuring out what data should be recorded for comparison. If the data is entirely incompatible, then we should have regulation requiring companies at minimum to transcribe the data into a consumable format after an event such as this.
> ... if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.
If the models saw anything at all, they should not have driven over it. Even human drivers are taught that driving over road debris is dangerous. At minimum it puts the car/occupants at risk - in extreme cases, the driver may not recognize what it is they are driving over.
If this isn't a case where the car was physically unable to stop - it's more likely the telemetry didn't identify the person as an obstacle to avoid at all.
The great thing about computer-generated test cases is you don't need real footage of every hypothetical awful thing that could happen, e.g. a truck losing control and rolling over sideways. These could be a stage 1 test, a prerequisite to real testing, like the hazard test you take before you're allowed to sit the driving test with a real examiner, to make sure you meet a minimum bar.
That being said, when the first real cars were introduced to the world and later improved upon there were many more fatalities than we're likely to experience with self-driving technology.
Every technology is dangerous. Every technology costs lives to some extent when spread across billions of people. I'm sure forks take more lives each year than self driving cars.
Weigh this against the potential lives saved. I posit you'd be killing more people with drunk drivers by slowing innovation than you'd be saving via luddism.
What's more, a lot of people seem to think Waymo's tech is further along. So not only does this project have to succeed, it is also the underdog. So no wonder they're aggressive.
We all make similar decisions all our lives, but nearly always at some remove (with rare exceptions in fields like medicine and military operations). Autonomous vehicles are a very visceral and direct implementation. The difference between the trolley problem and autonomous vehicles is in the time delay and the amount of effort and skill required in execution.
Plus, we're taking what is pretty clearly a moral decision and putting it into a system that doesn't do moral decision-making.
There's also NHTSA (which is indeed part of the DoT).
It looks to me like we don't need any new agency at all, just a very small expansion of NHTSA's mandate to specifically address the "self-driving" part of self-driving cars.
Then by all means let's stay at home and avoid putting humans in rockets ever again, because if you think space exploration will be done without deaths you are in for a surprise.
The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News". I'm one-third a class M planet away from the incident and reading about it.
They are rare only because self-driving cars are; I don't think the total driven miles of all self-driving cars are enough that even 1 fatality would be expected if they were human driven; certainly Uber is orders of magnitude below that point taken alone.
There are lots of fatalities from human-driven cars, sure, but that's over a truly stupendous number of miles driven.
There are different estimates from different sources using slightly different methodologies, but they are all in the neighborhood of one road fatality per 100 million miles traveled.
Waymo claims to have reached 5 million miles in February; Uber (from other posts in this thread) is around 1 million miles. The whole self-driving industry is nowhere near 100 million miles, and it has one fatality. So as of today it's way worse than human driving for fatalities.
Of course, it's also way too little data (on the self-driving side) to treat as meaningful in terms of predicting the current hazard rather than simply measuring the outcomes to date.
 see, e.g., https://en.m.wikipedia.org/wiki/Transportation_safety_in_the...
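To make the comparison concrete, a tiny back-of-the-envelope sketch using the figures above (the mileage numbers are the rough claims from this thread, not audited data):

```python
# Expected fatalities at human-driver rates (~1 per 100M miles) over the
# claimed self-driving mileage. Mileage figures are rough thread claims.

HUMAN_FATALITY_RATE = 1 / 100_000_000  # fatalities per mile

fleet_miles = {"Waymo": 5_000_000, "Uber": 1_000_000}

for name, miles in fleet_miles.items():
    print(f"{name}: {miles * HUMAN_FATALITY_RATE:.2f} expected fatalities "
          f"over {miles:,} miles at human rates")
# Waymo: 0.05, Uber: 0.01. One actual fatality is far above either number,
# though a single event is far too little data to estimate a rate from.
```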
Does the NTSB have regulations on how black boxes are allowed to function?
It would be better if the code and data were open for public review.
Making it public means more eyes on the data, which can lead to a better understanding of what went wrong.
This comment by Animats in the context of self-driving trucks is quite telling. He warns precisely of this danger.
Take a look at the cost of airplanes, even small ones.
As we understand more about the risks associated with autonomous driving, we should expand and enrich this dataset, and to ensure public safety, testing against such a dataset should be part of NHTSA / Euro NCAP testing.
I.e. NHTSA and Euro NCAP should start getting into the business of software testing.
I think the idea is to build a "Traincar of Reasonability" to test future autonomous vehicles with.
You might want to check out her research https://hal.pratt.duke.edu/research
I'm sure Dr. Cummings would be more than happy to talk about issues facing validation and verification in the context of NHTSA/FAA.
They were unwilling to legally obtain a self-driving license in California because they did not want to report "disengagements" (situations in which a human driver has to intervene).
Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.
This is a strange question to ask. The regulation is not there to benefit Uber; it is there to benefit the public good. Very few companies would follow regulation if it were a choice. The setup of such regulation would be for non-compliance to be criminal. And if Uber could not operate in California (or the USA) without complying, it would be in their interest to provide the requested information.
Essentially, Uber engages in regulatory arbitrage, taking into account the cost-benefit of breaking the law; i.e. if breaking the law is profitable for them, they seem to do it.
My 'money' is on people with money figuring out loopholes, like plausible deniability.
One of my biggest complaints about the self-driving car space is that real lives are at stake; light-touch "voluntary" rules suitable for companies that publish solitaire clones aren't going to cut it here.
Uber doesn't get to pick and choose what regulations they wish to follow.
> Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.
That sounds quite negligent and a cause for heightened repercussions if anything happens.
The strange attitude you display is the _reason_ there are regulations.
I can’t say anything about GM‘s rep in the US, but here in Europe they are not doing so well. Chevrolet was killed in 2015, and Vauxhall/Opel are doing only ok-ish. Chevy had SO many recalls in the years before they killed it.
Uber already has a terrible reputation with everyone in the tech industry for the sexism, bullying, law breaking, and IP theft. Do they really want to be the self-driving car company with a reputation for killing people?
It doesn't take a lot for people to think "Maybe I'll take a lyft" or "Maybe I'll ban uber from London because of their safety record".
They aren't going to kill the market for this - the other players not only have big incentives to make sure they look safe, but you've got a really unique problem when your biggest competitor is a company that controls access to news and which adverts your customers will see.
It’s the only real solution to corporations misbehaving.
The only working regulation is one that is an existential threat to the company. Which means huge financial punishments.
Still, corporations, while being essentially a different kind of life, are not entirely separate entities - they are composed of people. Fear of jail might just be enough to get some high-level executives to exert proper influence on the direction of the corporation.
Enron's founder, Kenneth Lay, was also convicted and faced as much as life in prison. However, he died before sentencing: http://www.nytimes.com/2006/07/05/business/05cnd-lay.html
As well as a core set of tests that define minimum competence, these tests could include sensor failure, equipment failure (a tire blowout, the gas pedal getting stuck, the brakes failing) and unexpected environmental changes (ice on the road, a swerving bus).
Manufacturers could even let the public develop and run their own test cases.
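A sketch of what such publicly runnable test cases might look like (the scenario fields and the `simulate` callback are hypothetical, just to illustrate the shape of the idea):

```python
# Hypothetical scenario-based test suite for AVs. A real harness and its
# scenario format would be defined by the regulator or manufacturer.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Scenario:
    name: str
    speed_limit_kmh: float
    equipment_failure: Optional[str]   # e.g. "tire_blowout", "brake_loss"
    surprise_event: Optional[str]      # e.g. "swerving_bus", "ice_patch"

SCENARIOS = [
    Scenario("tire blowout at highway speed", 110, "tire_blowout", None),
    Scenario("ice patch plus swerving bus", 80, None, "swerving_bus"),
    Scenario("pedestrian steps off median at night", 56, None,
             "pedestrian_from_median"),
]

def run_suite(simulate: Callable[[Scenario], bool]) -> bool:
    """simulate(scenario) returns True if the vehicle avoided a collision."""
    ok = True
    for s in SCENARIOS:
        passed = simulate(s)
        print(f"{'PASS' if passed else 'FAIL'}: {s.name}")
        ok = ok and passed
    return ok
```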
It is more an issue of how sophisticated these vehicles should be before they're let loose on public roads. At some stage they have to be allowed onto public roads or they'd literally never make it into production.
If they're not willing to dogfood their own potential murder machines then why should the public trust them on the public roads?
So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.
Moreover, the human brain won't like processing these freak accidents. People die in car crashes every damn day. But we have become really accustomed to rationalizing that: "they were struck by a drunk driver", "they were texting", "they didn't see the red light", etc. These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".
But these algorithms will not fail like that. Each accident will be unique and weird and scary. I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road. It'll always be tragic, unpredictable and one-off.
Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road. Machine learning's role should be limited to classifying whether each obstacle is a pedestrian, a bicyclist, another car, or something else. That classification lets the self-driving car plan better: e.g., if it predicts that an obstacle is a pedestrian, it can plan for the event that the pedestrian is considering crossing the road, and reduce speed accordingly.
However, the only purpose of relying on the machine learning classification should be to improve the comfort of the drive (e.g., avoiding abrupt braking). I believe we can reasonably expect that, within reason, the self-driving car nevertheless maintains an absolute safety guarantee (i.e., it doesn't run into an obstacle). I say "within reason" because of course if a person jumps in front of a fast-moving car, there is no way the car can react. I think it is highly unlikely that this is what happened in the accident; pedestrians typically exercise reasonable precautions when crossing the road.
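A minimal sketch of that separation (hypothetical types, distances, and thresholds): obstacle avoidance is a hard constraint driven directly by the LIDAR geometry, while the ML label only tunes comfort and anticipation:

```python
# Sketch: the safety layer ignores the classifier entirely; the ML label
# only makes the ride smoother. All numbers are invented for illustration.
import math
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # from LIDAR geometry
    label: str          # "pedestrian", "cyclist", "car", "debris", ...
    confidence: float   # classifier confidence, 0..1

def max_safe_speed(distance_m: float, decel: float = 7.0) -> float:
    """Highest speed from which we can still stop within distance_m."""
    return math.sqrt(2 * decel * max(distance_m, 0.0))

def plan_speed(target_mps: float, obstacles: list) -> float:
    speed = target_mps
    for ob in obstacles:
        # Comfort layer (ML-informed): ease off early near likely
        # pedestrians, even at low classifier confidence.
        if ob.label == "pedestrian" and ob.distance_m < 50:
            speed = min(speed, 8.0)
        # Safety layer (label-independent): never exceed the speed from
        # which we can stop short of ANY LIDAR-confirmed obstacle.
        speed = min(speed, max_safe_speed(ob.distance_m))
    return speed

print(plan_speed(17.0, [Obstacle(30.0, "pedestrian", 0.4)]))  # capped at 8.0
```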
It's a point of clarification that the originally listed study doesn't take into account, but which could be important to the broader discussion. Especially considering that while this vehicle had LIDAR, the other autonomous vehicle fatality case did not.
> as are basically all other prototype fully-autonomous vehicles
As I pointed out with examples above, no, they are not.
You can get a depth-sensing (time-of-flight) 2D camera, the Orbbec Astra, for $150, or a 1D laser scanner, the RPLIDAR, for $300. Of course they are probably not automotive-grade, but to me even an extra $2,000 for self-driving sensors isn't that much.
LIDAR is also not perfect when the road is covered in 5 inches of snow and you can't tell where the lanes are. Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.
With erratic input, you will get erratic output. Even the best ML vision algorithm will sometimes produce shit output, which will become input to the actual driving algorithm.
Neither are humans, and a self-driving car can react much faster than any human ever could.
Are you working on the next season of Black Mirror?
In all seriousness, my fear (and maybe not fear, maybe it's a happy expectation in light of the nightmare scenarios) is that if a couple of the "weird and terrifying" accidents happen, the government would shut down self-driving car usage immediately.
Your fear is very much grounded in reality. US lawmakers tend to be very reactionary, except in rare cases like gun laws. So it won't take much to have restrictions imposed like this. Granted, I believe some regulation is good; after all the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation. But self driving cars are so new and our lawmakers are by and large so ignorant, that I wouldn't trust them to create good regulation from the get go.
They're still very reactionary in that, which is precisely why it isn't very effective when a subset of them do react: there are plenty of smart things that could get proposed, but the overlap between people who know what they're talking about and people who want the laws is exceptionally small, so consequently dumb, ineffective stuff that has no chance of passing anyway gets proposed. What does get proposed is a knee-jerk reaction to what just happened, and rarely actually looks systemically at the current laws and gun violence as a whole. Example: the Las Vegas shooting prompted a lot of talk of bump stock bans. Bump stocks are rarely used at all, never mind in violence, and they will generally ruin guns that weren't originally made to be fully automatic quite quickly if they're actually used for sustained automatic fire. A silly point to focus on suddenly. After the Florida shooting last month, so much focus went to why rifles are easier to obtain than handguns. And it's because overwhelmingly most gun violence involves handguns. Easily concealable rifles are already heavily regulated at the federal level for that very reason.
This is nonsense. Typical semi-autos are wayyy overbuilt. Barring mechanical wear or explicit tampering with the disconnector, there is no risk whatsoever in firing thousands of rounds with a bump stock. If anything, the plastic/wood furniture is more likely to burn/melt before the mechanical parts actually fail. At worst, you might bend a gas piston, but the rifle will otherwise be fine.
The underlying reasoning behind the push against the bump stock ban is that it was basically a semi-auto ban, as with a bit of training you can trivially bump fire any semi-auto without a bump stock, from either the shoulder or the hip, with a mere finger.
Tubes on low- to mid-range civilian DE guns can burn out very quickly, and are in fact designed to do so long before you get damage to the more expensive parts of the gun. I've seen it happen in most of the cases (which are admittedly quite few in number, despite how often I'm there) where I've seen someone using a bump stock at a range. In the most recent case I think the guy was on his 3rd mag when it ruptured; it was an M&P 15 Sport II, if I recall. Not a cheap no-name brand, but about as low-cost as you can get, and missing all the upgrades in the version they market to cops. High-end ARs would fare better, I'd expect, but high-end ARs are again so rarely used for actual violence, because they're usually only purchased by people shooting for a serious hobby in stable life situations. And honestly, I feel the same people buying those probably find bump stocks as tacky and gaudy as I do.
Even in the most liberal interpretation of the proposed law, I don't think any bump stock ban would become a semi-auto ban. I could see the vague language getting applied to after-market triggers, especially ones like Franklin Armory, but you've gotta have some added device for any of the proposals I've seen to even remotely apply.
It's amazing how reformers get demonized even after their platform is accepted wholesale by the rest of the world.
And for good reason: they are constitutionally prohibited from doing so.
All safety functionality was introduced and used way before regulators even knew it was possible.
Edit: please explain the downvotes, ideally with examples
In Black Mirror, all cars in the world will simultaneously swerve into nearby pedestrians, buildings, or other cars.
Scary stuff. I hope these self-driving cars will be able and designed to work while completely offline, with no built-in way to ever connect to a network. But given the biggest player in the field seems to be Google, they'll probably be always connected in order to send data to the mothership and receive ads to show you.
Don't hold your breath. There will be a huge load of data ready to be sold to advertising companies just by listening to what passengers talk about when passing near areas/stores/billboards/events, etc.
I'm just envisioning a scenario where the car automatically pulls to the side of the highway, locks the doors, and serves you a 15-second ad, and then the doors unlock and the journey resumes as normal.
Why? This is what the self-driving car industry insists on, but it has nowhere near been proven (only BS stats under ideal conditions: no rain, no snow, selected roads, etc., and those as reported by the companies themselves).
I can very well imagine an AI that drives better than the average human. But our being able to write it anytime soon is not a law of nature.
It might take decades or centuries to get out of some local maxima.
General AI research also promised the moon back in the '60s and '70s, and it all died with little to show for it in the '80s. It was always "a few years down the line."
I'm not so certain that we're gonna get this good car AI anytime soon.
The solutions to not killing people whilst driving aren't rocket science but too many humans seem to be incapable of respecting the rules.
I do think that self-driving cars will be safer, but it's on their proponents to prove that.
Each of the crashes that self-driving cars cause can be analyzed, fixed, and prevented from happening again. The list I gave consists of human flaws that will almost certainly never be fixed.
I further agree with you that it's up to the proponents to prove that. It's a good thing to force a really high bar for self-driving cars. Then, assuming the technology is maintained, once the AI passes the bar it should only ever get better.
Not if you put neural networks / deep learning into the equation. This stuff is black boxes connected to black boxes that work fine until they don't, and then nobody knows why they failed, because all you have is a bunch of numbers with zero semantic information attached to them.
The only questionable parts will be where the vision system fails, and those are similar actually to human problems. Because human vision also often fails (sunlight on windshield, lack of attention, darkness, etc.)
Are you in very vague words implying that AGI has been invented? AI might have matched humans in image recognition, but it is far away in general decision making.
And finally, I am tired of hearing "safer than a human." That should never be the comparison; rather, it should be a human at the helm with an AI running in the background that takes over when the human makes an obvious mistake -- you know, like an emergency braking system.
If those situations recur exactly as they happened the first time, sure they can be prevented from happening again.
That is, if a car approaches the exact same intersection at the exact same time of day, and a pedestrian who looks exactly like the pedestrian in this accident crosses the street in exactly the same way, with exactly the same other variables (like all the other pedestrians and cars around that the sensors can see), the data could be similar enough that the algorithm will recognize the situation as close enough to the original to avoid the accident this time.
But it's not at all clear how well their improvements will generalize to other situations which humans would consider to be "the same" (i.e. when any pedestrian at any intersection crosses any street).
If self-driving cars are, at their best, roughly as capable as a human driver.
This is a big 'if'.
The solution to not killing people is a kind of rocket science. In fact, it's probably harder than rocket science. It's predicated on a lot of things that are very, very, very hard. The fact is that humans, who are already pretty capable of most of these very very hard things, often choose to reduce their own capabilities.
If the best self-driving tech is no better than a drunk human, however, then we haven't gained much.
 though perhaps not harder than brain surgery.
However, the process you're describing, of collecting human-level performance data, requires the ability to gather all of the data relevant to the act of driving in a manner consumable by the algorithm in question. This is the simulation problem, and it's very, very, very hard (it's why genetic algorithms have traditionally not gotten much further than toy examples, in spite of being a cool idea). Perhaps it is very important to have an accurate model of the intentions of other agents (e.g., pedestrians) in order to take preventative action rather than purely react. Perhaps it is very important to have a model of what time of day it is, or the neighborhood you're driving in, the likelihood that it is going to rain some time in the next hour, or whether the stock market closed up or down that day.
It also assumes that neural networks (or the more traditional systems used elsewhere) are sufficiently complex to model these behaviors accurately, which we do not yet have an answer to.
So, when I say, 'a big if', I mean for the foreseeable future, barring some massive technological/biological breakthrough. That could be a very long time.
Because machines have orders of magnitude fewer failure modes than humans, but with greater efficiency. It's why so much human labour has been automated. There's little reason to think driving will be any different.
You can insist all you like that the existing evidence is under "ideal conditions", but a) that's how humans pass their driving tests too, and b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.
It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.
Repetitive, blunt, manual labor now, and probably much basic legal/administrative/medical work in the near future. But we still pay migrant workers to harvest fruit, and I don't imagine a robot jockey winning a horse race anytime soon.
Driving a car under non-ideal conditions is incredibly complex, and relies upon human communication. For example: eye contact between a driver and a pedestrian; one driver waving at another to go ahead; anticipating the behavior of an old lady in an Oldsmobile. Oh, the robots will be better drivers eventually, but it will be a while. We humans currently manage about one death per hundred million miles; Uber made it all of two million. I expect we'll have level 5 self-driving cars about the same time we pass the Turing test.
Harvesting fruit is far more complex than driving. It's a 3D search through a complex space.
> Driving a car under non-ideal conditions is incredibly complex, and relies upon human communication.
No it doesn't. The rules of the road detail precisely how cars interact with each other and with pedestrians.
> We humans currently manage about one death per hundred million miles; Uber made it all of two million.
Incorrect use of statistics.
Are you making a joke? "a 3D search [for a path that reaches the destination safely and legally] through complex space" is exactly how I would describe driving. (Also, driving is an online problem.)
I mean, yeah the topology is not as complex as a pure unrestricted 3d space but it's also more complex than pure 2d space. It's a search through a space, and it's complex, I don't know if nitpicking about the topology adds a lot here?
Even navigational paths that consider all of the junctions, ramps, etc. are simply reduced to a weighted graph with no notion of any dimensions beyond forward and backwards.
>It's why so much human labour has been automated.
But how much human labour that is as complicated as driving has been automated? As far as I can tell automation is very, very bad when it needs to interact with humans who may behave unexpectedly.
>b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.
>It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.
Actually plenty of experts within the field disagree with you.
“I tell adult audiences not to expect it in their lifetimes. And I say the same thing to students,” he says. “Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation.”
Steve Shladover, transportation researcher at the University of California, Berkeley
With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard. But guess what? To solve the real problem, for you or me to buy a car that can drive autonomously from point A to point B—it's not even close. There are fundamental problems that need to be solved.
Herman Herman, Director of the National Robotics Engineering Center @ CMU
Quite a lot actually.
These days you can produce food for several thousands of people using a few hundred people and plenty of machines.
Part of the reason why we haven't yet reached a Malthusian catastrophe is this.
Automated driving is more like a fully automated chef, that can create new dishes from what his clients tell him they like. Without the clients being able to properly express themselves. That's a lot more complicated than following a recipe.
Difficulty of automation goes roughly trains < planes << cars.
Automated trains are simple, but don't provide much value. Automating planes provided value because it's safer than just with human pilots. Automated cars are a different league of complexity.
Driving is not complicated at its core. Travel along vectors that intersect at well-defined angles. Stop to avoid obstacles whose vectors intersect with yours.
Sometimes those obstacles will intersect your vector faster than you can stop, which is probably what happened to this woman. As long as the autonomous car was following the prescribed laws, it's not at fault, and a human definitely would not have been able to stop either.
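A minimal sketch of that framing (purely illustrative kinematics with made-up numbers; the only machine "advantage" modeled is a ~0.1 s reaction time versus a human's ~0.6 s):

```python
# Check whether braking now stops the car short of an obstacle that has
# entered our path. Illustrative kinematics only, not a planner.

def can_stop_in_time(speed_mps: float, obstacle_dist_m: float,
                     reaction_s: float = 0.1, decel: float = 7.0) -> bool:
    """True if reaction distance + braking distance < obstacle distance."""
    stopping = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel)
    return stopping < obstacle_dist_m

# 38 mph is about 17 m/s; a pedestrian stepping out 20 m ahead:
print(can_stop_in_time(17.0, 20.0, reaction_s=0.1))  # False: ~22 m needed
print(can_stop_in_time(17.0, 25.0, reaction_s=0.1))  # True: machine reacts
print(can_stop_in_time(17.0, 25.0, reaction_s=0.6))  # False: human too slow
```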
> Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting.
Which is why self-driving cars don't depend on visual light, and why prototypes are being tested in regions without inclement weather. Being on HN, I'm sure you're well familiar with the product development cycle: start with the easiest problem that does something useful, then generalize as needed.
> With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard.
Right, so the experts agree with me that the problem the pilot projects are addressing is readily solvable, and that general deployment will take a number of years of further research, but isn't beyond our reach. This past year I've already read about sensors that can peer through ice and snow. 15 years is not at all out of the question.
If a ball bounces in front of me, I slow down expecting a dog or a child running after it. No self-driving car now, or in 30 years, is going to be able to infer that.
Driving is essentially interacting with the environment: reading hand signals from people, understanding the intent of pedestrians, cyclists, and other drivers. No way any AI can do that now.
Trains travel along a straight line, not a vector in 2D space.
> If a ball bounces in front of me, I slow down expecting a dog or a child running after it. No self driving car now, and in 30 years is going to be able to infer that.
Incorrect. I don't know why you think humans are so special that they're the only system capable of inferring such correlations.
If self-driving cars really are safer in the long-run for drivers and pedestrians - maybe what people need is a better grasp on probability and statistics? And self-driving car companies need to show and publicize the data that backs this claim up to win the trust of the population.
If the road were filled with self-driving cars there would be fewer accidents, but I wouldn't understand them, and with that comes distrust.
Freak accidents without explanations are not going to cut it.
Also, my gut feeling says this was a preventable accident that only happened because of many layers of poor judgement. I hope I'm wrong but that is seriously what I think of self-driving attempts in public so far. Irresponsible.
Perhaps, similar to airline crashes, we should expect Uber to pay out to the family, plus a penalty fine. $1M per death? $2M? What price do we put on a life?
This is a tough problem, but stopping or limiting the allocation of legal resources to protecting the secretive intellectual property that is autonomously running people over is the most effective incentive I can see. Plus it's pretty easy to do.
We don't even have to force them to disclose anything, by affording less legal protection, their employees will open-source it for us.
Definitely, though my interpretation of your statement is "self driving cars have only killed a couple people ever but human cars have killed hundreds of thousands". If that's correct, that's not going to win anyone over nor is it necessarily correct.
While the state of AZ definitely has some responsibility for allowing testing of the cars on their roads, Uber needs (imo) to be able to prove that the bug which caused the accident was such an edge case that they couldn't reasonably have foreseen it.
Are they even testing this shit on private tracks as much as possible before releasing anything on public roads? How much are they ensuring a human driver is paying attention?
Maybe because it's unexpected: the victim is not involved until they are dead?
I mean, people play the lottery. That's a guaranteed loss, statistically speaking. In fact, it's my understanding that, where I live, you're more likely to get hit by a (human-operated) car on your way to get your lottery ticket than you are to win any significant amount of money. But still people brave death for a barely-existent chance at winning money!
Tangent: is there a land vehicle designed for redundant control, the way planes are? I've always wondered how many accidents would have been prevented if there were classes of vehicles (e.g. large trucks) that required two drivers, where control could be transferred (either by push or pull) between the "pilot" and "copilot" of the vehicle. Like a driving-school car, but where both drivers are assumed equally fallible.
Do we even know yet what's happened?
It seems rather in bad taste to take someone's death, not know the circumstances, and then wax lyrical about how it matches what you'd expect.
But the problem is Uber's business plan is to replace drivers with autonomous vehicles ferrying passengers. i.e. take the driver cost out of the equation. Same goes for Waymo and others trying to enter/play in this game. It's always about monetization which kills/slows innovation.
Just highway-mode is not going to make a lot of money except in the trucking business and I bet they will succeed soon enough and reduce transportation costs. But passenger vehicles, not so much. May help in reducing fatigue related accidents but not a money making business for a multi-billion dollar company.
That being said, really sad for the victim in this incident.
Why did this happen? What steps have we taken to make sure it will never happen again? These are both methods of analysing and fixing problems and methods of preserving a decision-making authority. Sometimes this degrades into a cynical "something must be done" for the sake of doing something, but... it's not all (or even mostly) cynical. It just feels wrong going forward without correction, and we won't tolerate this from our decision makers. Even if we will, they will assume (out of habit) that we won't.
"We can't know how this happened. There is nothing to do... and this will happen again, but at a rate lower than human drivers' more or less opaque accidents." I'm not sure how that works as an alternative to finding out what went wrong and doing something about it.
Your comment is easily translated into "you knew there was a glitch in the software, but you let this happen anyway." Something will need to be done.
I think any attempts to address such issues have to come with far-ranging transparency regulations on companies, possibly including open-sourcing (most of) their code. I don't think regulatory agencies alone would have the right incentives to actually check up on this properly.
In a nearby town, people have petitioned for a speed limit for a long time. Nothing happened until a 6 year old boy was killed. Within a few weeks a speed limit was in place.
One of the big questions I have about autonomous driving is if it's really a better solution to the problems it's meant to solve than more public transportation.
I think this is really key. The ability to put the blame on something tangible, like the mistakes of another person, somehow allows for more closure than if it was a random technical failure.
It boggles my mind that a forum full of computer programmers can look at autonomous cars and think "this is a good idea".
They are either delusional and think their code is a gift to humanity or they haven't put much thought into it.
Autonomous cars, as they exist right now, are not up to the task at hand.
That's why they should still have safety drivers and other safeguards in place. I don't know enough to understand their reasoning, but I was very surprised when Waymo removed safety drivers in some cases. This accident is doubly surprising, since there WAS a safety driver in the car in this case. I'll be interested to see the analysis of what happened and what failures occurred to let this happen.
Saying that future accidents will be "unexpected" and therefore scary is FUD in its purest form, fear based on uncertainty and doubt. It will be very clear exactly what happened and what the failure case was. Even as the parent stated, "it saw a person with stripes and thought they were road" - that's incredibly stupid, but very simple and explainable. It will also be explainable (and expect-able) the other failures that had to occur for that failure to cause a death.
What set of systems (multiple cameras, LIDAR, RADAR, accelerometers, maps, GPS, etc.) had to fail in what combined way for such a failure? Which one of N different individual failures could have prevented the entire failure cascade? What change needs to take place to prevent future failures of this sort - even down to equally stupid reactions to failure as "ban striped clothing"? Obviously any changes should take place in the car itself, either via software or hardware modifications, or operational changes i.e. maximum speed, minimum tolerances / safe zones, even physical modifications to configuration of redundant systems. After that should any laws or norms be changed, should roads be designed with better marking or wider lanes? Should humans have to press a button to continue driving when stopped at a crosswalk, even if they don't have to otherwise operate the car?
Lots of people have put a lot of thought into these scenarios. There is even an entire discipline around these questions and answers, functional safety. There's no one answer, but autonomy engineers are not unthinking and delusional.
It is not that we think that software is particularly good, it is that we have a VERY dim view of humanity's ability to do better.