Neither the driver nor the car manufacturer will have clear responsibility when there is an accident. The driver will blame the system for failing and the manufacturer will blame the driver for not paying sufficient attention. It's lose-lose for everyone: the company, the drivers, the insurance companies, and other people on the road.
Everybody who's looked at this seriously agrees. The aviation industry has looked hard at that issue for a long time. Watch "Children of the Magenta". This is a chief pilot of American Airlines talking to pilots about automation dependency in 1997. Watch this if you have anything to do with a safety-related system.
It seems that we are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation. The pattern is common to our time but is acute in aviation. Air France 447 was a case in point. - William Langewiesche, 'The Human Factor: Should Airplanes be Flying Themselves?', Vanity Fair, October 2014
Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator. - Ashwin Parameswaran (2012)
... from my fortune clone @ http://github.com/globalcitizen/taoup
You are correct in the ratio (1:100) but I think you mistook departures for flights and extrapolated 2 orders of magnitude too far.
You sure about that number? Seems incredibly high to me.
Maybe 3 billion passengers/year?
This isn't just a matter for Tesla. The auto industry is rapidly heading for much better assistive driving systems. There's no way that the people heads-down in their cell phones are going to do this less once they realize they don't really need to pay attention.
Will accident rates get better overall anyway? Who knows? But systems that aren't intended for autonomous use are going to get used that way.
I loved this system and have always wished for something like this on a car in the US. I've never liked the common cruise control in the US -- where you set a target and it applies the gas for you -- because I didn't like how removing my foot from the gas pedal moved my foot fractions of a second further from the brake in an emergency.
Probably did not wait long enough for the reply button to appear.
Though I'd check my ballast too, and probably read a book.
It's not even just that. As of 2015, the system seems to be worse at managing the gas overall than a human is. I'm not a huge fan of cruise control as defined above because I find it makes me really inattentive (my problem, of course), but a benefit of not using cruise control is that you seem to get much better mileage. My family had to drive regularly between Minnesota, Wisconsin, and Illinois for a few years, and I would always get better mileage not using cruise control than my brothers (who used it) would. The difference was often as much as half a tank of gas or more on older cars (2000-2010), and on relatively newer cars (2015) it was still a difference of a fair number of gallons.
I think the systems just aren't good at predicting when to coast and when to accelerate, and for very hilly regions, this means a lot of wasted gas.
Another feature is speed warnings, which beep at you when you exceed them. Currently they seem to support only a single threshold, but it should be possible to integrate them with satnav and speed-sign recognition. I expect these would be safer, especially if they linked up with a "black box" to report excessive violations to a parent, insurance company, police, etc.
This way my foot can readily cover the brake pedal, and I initiate braking much quicker than moving from gas to brake.
I view cruise control as a safety feature. I can keep my foot hovering over the brake pedal instead of on the accelerator, reducing reaction time in a crisis. Maintaining attention on the road has never been a problem for me, though I suppose the hovering-foot posture helps.
One catch there is that it'll stop automatically, but it won't ever go if it came to a full stop - you need to tap the gas to reactivate cruise. If the car in front of you starts moving, but you don't, it'll remind you that you're supposed to (an audible alert plus a visual one on the dashboard). I suppose that's a kind of a safeguard to keep the human alert?
This could partly be a consequence of living in the Pacific Northwest... lots of winding mountain highways!
The lane tracking isn't that great, but I don't mind it that much. I don't use it much for normal driving, but I found it's pretty fantastic in heavy crosswinds. The car does a pretty good job of keeping the lane (assuming you can see the markings well enough), so you basically drive like normal instead of having to constantly fight wind gusts.
Under normal conditions it doesn't do enough to be terribly useful unless you're not paying enough attention… at which point you shouldn't be using it anyway.
Cruise control seems fairly harmless - you still have to keep lanes, and keeping your foot on the gas isn't particularly demanding either. I largely use cruise control because I am able to save on gas that way, by avoiding unnecessary acceleration/deceleration. Combination with lane following and holding of distance is more problematic imo.
The counter-argument is that they do this without Autopilot anyway. Given that they're already not paying attention, adding in Autopilot seems like a net gain.
Also, as a pilot, I can tell you that the Tesla Autopilot functions very much like what we have in planes. It steers the vehicle and significantly decreases workload while increasing overall safety but needs to be constantly monitored and can easily be misconfigured or operate in unexpected ways.
Remind me not to drive in your neighborhood. (This appears to be hyperbole.)
People rear-end each other in my city every day. On my commute, I'll come across two a day; on wet days, a few more. On the other five routes headed into the city I would expect statistics similar to my experience. Just listening to traffic radio, there are going to be at least five crashes around the city, almost always more, and they don't report on fender benders.
I don't think I ever saw three separate rear-ends in a single drive. I can't say for certain that I ever saw two. I didn't drive it every day, but I probably did 100 such commutes.
You sound either like you're massively exaggerating or live someplace with apocalyptic traffic.
I drive 30 miles through Chicago traffic on the Interstate everyday. I see at least 3-4 accidents per week (people pulled over to the side of the road, out of the way, or at the accident investigation sites.) Most of these are minor fender benders. I'm sure if I was in rush hour traffic for the full 4-6 hours (not just my 90 minutes of commute) I would see way more. They mostly happen in near bumper-to-bumper stop and go traffic (someone misses the guy in front of them stopping) or when traffic unexpectedly comes to a standstill from going 10-20 MPH.
Every morning and at least two of the evenings. And the weather has been reasonable. I typically see 4 to 5 Teslas a day during my commute... driving sedately, and they aren't the ones involved in the accidents.
Maybe we'll break the streak tomorrow and have no accidents.
Yeah, if you spend 4-6 hours a day in traffic, you'd see a lot of accidents. That... seems uncontroversial.
The Greater Toronto Area has approx. 20% of Canada's population (~35M). If we assume those ~300 injuries per day are evenly distributed across Canada (which seems unlikely, given how bad the driving conditions are on the 401 and DVP), that's ~60 injuries per day in the GTA of various severities.
I don't think someone encountering ~3 per commute during rush hour is unreasonable.
A few months ago I saw a guy, face-down in his phone, smash into an existing accident that was already surrounded by fire trucks. That was amazing.
The guy I was replying to said that he saw at least 2 every commute, often more. (Edit: Actually, sorry, 2 every day, not every commute).
Whenever you get full speed traffic occasionally interrupted by traffic jams (from whatever cause, other accident, tolls, weather, low angle sunlight, construction, etc.), you'll get a higher incidence of rear-enders. Especially when the tail of the slow/stopped traffic is at a point just past a hill or curve.
I got rear-ended myself some years ago in just such a situation, clear sky and dry road. The traffic ahead had slowed dramatically going into a section where the bright, low winter sun was in everyone's eyes, and we couldn't see that before the gentle bend and rise in the road. I saw the slowing traffic and had to brake hard; the person behind me braked late and hit me even though I'd tried to stretch my braking as much as possible to give her more room. There was another similar accident minutes later just behind us.
This kind of rapid-slowing situation in tight, fast traffic will likely get out of hand even for automated cars, unless there is car-to-car communication. The slight delay for each successive slowing car in the accordion effect accumulates until the available reaction time shrinks and the required deceleration rate grows past the car's performance envelope. At that point, a crash is inevitable.
With car-to-car communication and automation, the last car in the pack can start braking almost simultaneously with the first one and avoid this.
So, no, it's not hyperbole, it's ordinary.
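To make the accordion claim concrete, here is a toy follow-the-leader simulation (the driver model, gains, and one-second reaction delay are invented assumptions, not a validated traffic model). With the delay and a fixed-gap following rule, a modest slowdown by the lead car tends to deepen as it propagates down the chain; set DELAY = 0.0 to compare:

    # Toy accordion-effect sketch. A chain of cars follows a leader; each driver acts
    # on *delayed* observations of the car ahead and tries to hold a fixed gap.
    # All numbers are invented for illustration.
    import numpy as np

    DT, T_END = 0.05, 80.0                 # time step and horizon, seconds
    DELAY = 1.0                            # assumed driver reaction delay, seconds
    N_CARS, V0, GAP0 = 10, 30.0, 30.0      # cars, initial speed (m/s), initial gap (m)
    K_GAP, K_VEL, A_MAX = 0.2, 0.8, 4.0    # crude driver gains, max |accel| (m/s^2)

    steps, d_steps = int(T_END / DT), int(DELAY / DT)
    pos = np.array([-i * GAP0 for i in range(N_CARS)], float)
    vel = np.full(N_CARS, V0)
    pos_hist = np.zeros((steps, N_CARS))
    vel_hist = np.zeros((steps, N_CARS))

    for t in range(steps):
        pos_hist[t], vel_hist[t] = pos, vel
        acc = np.zeros(N_CARS)
        # Leader slows to 20 m/s between t = 10 s and t = 15 s, then resumes speed.
        target = 20.0 if 10.0 <= t * DT <= 15.0 else V0
        acc[0] = np.clip((target - vel[0]) / 2.0, -A_MAX, A_MAX)
        t_obs = max(0, t - d_steps)        # followers see the car ahead with a delay
        for i in range(1, N_CARS):
            gap = pos_hist[t_obs, i - 1] - pos[i]
            closing = vel[i] - vel_hist[t_obs, i - 1]
            acc[i] = np.clip(K_GAP * (gap - GAP0) - K_VEL * closing, -A_MAX, A_MAX)
        vel = np.maximum(vel + acc * DT, 0.0)
        pos = pos + vel * DT

    print("lowest speed reached by each car (m/s):", np.round(vel_hist.min(axis=0), 1))
    gaps = pos_hist[:, :-1] - pos_hist[:, 1:]
    print("smallest gap behind each car (m):", np.round(gaps.min(axis=0), 1))
    # A negative gap would mean the toy model produced a (virtual) rear-end collision.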
Is this really true?
It seems like, as long as the following delay between cars is greater than that reaction delay, there should be no such "accordion effect."
And yes, when you have an urban highway commute of any distance, it is not unusual to see that many crashes. Maybe not every day, but not far off, and often enough that you cannot rely on commute times, precisely because the crashes are so unpredictable.
You might try actually reading other posts before replying with trivial, inaccurate potshots. Sheesh.
2 is average for about a 60 mile drive during slightly off-rush. I suspect rush is higher.
And certain areas just seem to attract idiots.
What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers. It seems probable Level 2 systems will be better still.
A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.
> Neither the driver nor the car manufacturer will have clear responsibility when there is an accident
This is not how courts work. The specific circumstances will be considered. Given the novelty of the situation, courts and prosecutors will likely pay extra attention to every detail.
Pilots and conductors are trained professionals. The bar is lower for the drunk-driving, Facebooking and texting masses.
> Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be
This does not appear to be bearing out in the data.
The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous.
A Level 2 autopilot should be able to better predict when it will need human intervention. If the autopilot keeps itself in situations where it does better than humans most (not all) of the time, the system will outperform.
My view isn't one of technological optimism. It's derived from the low bar set by humans.
That's why Google concluded that L5 was the only way to go. You only get the benefit of computers being smarter than humans if the computer is in charge 100% of the time, which requires that its performance in the 1% of situations where there is an emergency must be better than the human's performance. That is the low bar to meet, but you still have to meet it.
Humans regularly mess up in supposedly-safe scenarios. Consider a machine that kills everyone in those 1% edge cases (which are in reality less frequent than 1%) and drives perfectly 99% of the time. I hypothesise it would still outperform humans.
Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.
Some quick googling suggests that the fatality rate right now is roughly 1 per 100 million miles. So, for a handover scenario with certain fatality to be an improvement over human control, it would have to happen less often than about once per 100 million miles, i.e. roughly once in the combined lifetime mileage of several hundred cars. In other words, the car would, for all practical purposes, have to be self driving.
The part that really bothers me (for some reason) is that those edge cases are frequently extremely mundane, uninteresting driving situations that even a child could resolve. They simply confuse the computer, for whatever reason.
I'm genuinely interested to see how consumers react to a reality wherein their overall driving safety is higher, but their odds of being killed (or killing others) are spread evenly across all driving environments.
Imagine the consumer (and driving habits) response to the first occasion wherein a self-driving car nicely drives itself through a 25 MPH neighborhood, comes to a nice stop at a stop sign, and then drives right over the kid in the crosswalk that you're smiling and waving at. Turns out the kid's coat was shimmering weirdly against the sunlight. Or whatever.
You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.
I hear you trying to make a Trolley Problem argument, but that is not the issue here. L2 is dependent on humans serving as a reliable backup.
I understand the concern. I am saying the problem of slow return from periods of extended inattention is not significant in comparison to general human ineptitude.
Level 2 systems may rely on "humans serving as a reliable backup," but they won't always need their humans at a moment's notice. Being able to predict failure modes and (a) give ample warning before handing over control, (b) take default action, e.g. pulling over, and/or (c) refusing to drive when those conditions are likely all emerge as possible solutions.
In any case, I'm arguing that the predictable problem of inattention is outweighed by the stupid mistakes Level 2 autopilots will avoid 99% of the time. Yes, from time to time Level 2 autopilots will abruptly hand control over to an inattentive human who runs off a cliff. But that balances against all the accidents humans regularly get themselves into in situations a Level 2 system would handle with ease. It isn't a trolley problem, it's trading a big problem for a small one.
The original point - that level 2 is a terrible development goal for the average human driver - still stands.
Also, you are definitely invoking the trolley problem: trading a big number of deaths that aren't your fault for a smaller number that are. Again, not the issue here. L2 needs an alert human backup. Otherwise it could very well be less safe.
But I would say the thrust of your argument is not that off, if we just understand it as "we need to go beyond L2, pronto".
The problem here is NOT the untrained driver -- it is the attention span and loss of context.
I've undergone extensive higher training levels and passed much higher licensing tests to get my Road Racing license.
I can tell you from direct experience of both that the requirements of high-performance driving are basically the same as the requirements to successfully drive out of an emergency situation: you must
1)have complete command of the vehicle,
2) understand the grip and power situation at all the wheels, AND
3) have a full situational awareness and understand A) all the threats and their relative damage potential (oncoming truck vs tree, vs ditch, vs grass), and B) all the potential escape routes and their potential to mitigate damage (can I fit through that narrowing gap, can I handbrake & back into that wall, do I have the grip to turn into that side road... ?).
Training will improve #1 a lot.
For #2 (grip and power) and #3 (situational awareness, the threats, and the escapes), there is no substitute for being alert and aware IN THE SITUATION AHEAD OF TIME.
When driving at the limit, either racing or in an emergency, even getting a few tenths of a second behind can mean big trouble.
When you are actively driving and engaged, you HAVE CURRENT AWARENESS of road, conditions, traffic, grip, etc. You at least have a chance to stay on top of it.
With autopilot, even with the skills of Lewis Hamilton, you are already so far behind as to be doomed. 60 mph=88 feet/sec. It'll be a minimum of two seconds from when the autopilot alarms before you can even begin to get the situation and the wheel in hand. You're now 50 yards downrange, if you haven't already hit something.
Even with skills tested to exceed the next random 10,000 drivers on the road, the potential for this situation to occur would terrify me.
I might use such a partial system in low-risk situations like slow traffic, where it's annoying and the energies involved are fender-bender level. Otherwise, no way. Human vigilance and context switching is just not that good.
I can't wait for fully-capable autodriving technology, but this is asking for trouble.
Quit cargo-culting technology. There is a big valley of death between assist technologies and full-time automation.
It's a question that both sides of the discussion claim answers to, and both sound reasonable. The only real answer is data.
As you've said, killing 100% of the time in the 1% scenarios may very well be better than humans driving all the time. Better, as defined by less human life lost / injuries.
Though, one minor addition to that is human perception. Even if numerically I've got a better chance to survive, not be injured, etc. in a 99% perfect auto-car, I'm not sure I'd buy it. Knowing that if I hear that buzzer I'm very likely to die is... a bit unsettling.
Personally I'm just hoping for more advanced cruise control, with radar identifying 2+ cars ahead of me and knowing about upcoming stops, etc. It's a nice middle ground for me, until we get the Level 5 thing.
Not dead, which I feel is important to point out. Involved in an incident, possibly a collision or loss of lane, but really it's quite hard to get dead in modern cars. A quick and dirty google shows 30,000 deaths and five and a half million crashes annually in the US - that's half a percent.
So in your hypothetical the computer drives 99% of the time, and of the 1% fuckups, less than 1% are fatal.
Even if the system has high confidence in its ability to handle a situation, if sufficient time has passed, request the driver resume control.
Then fuse the driver's inputs with the system's, either as additional training data or for backseat safety driving (e.g. the system monitoring the human driver).
More productively, Tesla currently senses hands on the wheel. Perhaps they could extend that with an interior camera that visually analyzes the driver's face to ensure their eyes are on the road.
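As a rough sketch of how such monitoring might escalate (the thresholds, names, and escalation steps are my own assumptions, not anything Tesla has published):

    # Illustrative driver-attention monitor: escalate warnings, then disengage safely.
    # All thresholds are invented for the sketch.
    from enum import Enum, auto

    class Alert(Enum):
        NONE = auto()
        CHIME = auto()
        LOUD_WARNING = auto()
        SAFE_DISENGAGE = auto()   # e.g. slow down, hazards on, hand control back

    def alert_level(hands_off_s: float, eyes_off_s: float) -> Alert:
        """Pick an escalation level; eyes-off-road escalates faster than hands-off-wheel."""
        if hands_off_s >= 60 or eyes_off_s >= 10:
            return Alert.SAFE_DISENGAGE
        if hands_off_s >= 30 or eyes_off_s >= 5:
            return Alert.LOUD_WARNING
        if hands_off_s >= 15 or eyes_off_s >= 2:
            return Alert.CHIME
        return Alert.NONE

    # Example: hands on the wheel, but eyes off the road for 6 seconds.
    print(alert_level(hands_off_s=0.0, eyes_off_s=6.0))   # Alert.LOUD_WARNING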
Actually the study explicitly doesn't show that.
First of all, the study purely measures accident rate before and after installation, so miles driven by humans are in both buckets. Second of all, the study is actually comparing Teslas before and after the installation of Autosteer, and prior to the installation of Autosteer, Traffic Aware Cruise Control was already present. According to the actual report:
The Tesla Autopilot system is a Level 1 automated system when operated with TACC enabled and a Level 2 system when Autosteer is also activated.
So what this report is actually showing is that Level 2 enabled car is safer than a Level 1 enabled car. Extrapolating that to actual miles driven with level 2 versus level 1 is beyond the scope of the study and comparing level 1 or level 2 to human drivers is certainly beyond the scope of the study.
You are correct. We do not have definitive data that the technology is safe. That said, we have preliminary data that hints it's safer and nothing similar to hint it's less safe.
Safer than? Human driving? No, we don't.
> So what this report is actually showing is that Level 2 enabled car is safer than a Level 1 enabled car.
which seems to disagree with the leading statement of the first comment in this thread:
> The level-2 driving that Tesla is pushing seems like a worst case scenario to me
As far as I know it is indeed correct that autopilot safety is statistically higher than manual driving safety (albeit with a small sample size).
However, something has always bothered me about that comparison ...
Is it fair to compare a manually driven accidental death (like ice, or a wildlife collision) with an autopilot death that involves a trivial driving scenario that any human would have no trouble with?
I don't know the answer - I'm torn.
Somehow those seem like apples and oranges, though ... as if dying in a mundane (but computer-confusing) situation is somehow inexcusable in a way that an "actual accident" is not.
That's a good question. Clearly, existing self-driving tech is safer than human drivers on average. However, "average" human driving includes texting while driving, drunk driving, falling asleep at the wheel, etc. Is the appropriate comparison the "average" driver, or a driver who is alert and paying attention?
The most appropriate comparison set would be the drivers who will replace themselves with autopilot-steered vehicles.
Have there been any Tesla autopilot fatalities with the right conditions to spark outrage? That's a sincere question as maybe I've missed some which would prove your point.
The only major incident I'm aware of is one in which only the driver of the car was killed. In an accident like that it is easy to handwave it away pretty much independent of any specifics (autopilot or no).
A real test of public reaction would involve fatalities to third parties, particularly if the "driver" of the automated vehicle survived the crash.
The actual performance of these machines will be the ultimate test. If they do consistently improve safety, then I don't really see many barriers existing here. The current unknowns and semantics surrounding it will be worked out in markets and in courts over an extended period of time, and will ultimately be (primarily) driven by rationality in the long run.
The market will be incentivised toward the safest option; despite all the noise around it, this is ultimately a rational market.
Insurance is so boring it is interesting to me.
It depends on how you measure this. We always talk about humans being bad at driving. Humans are actually amazingly good drivers, conditioned upon being alert, awake, and sober. Unfortunately a good fraction of people are in fact not alert. If you don't condition on this, then yes, humans suck.
(Put another way, the culture we, including companies such as Tesla, foster of working people overtime is probably more responsible for car accident deaths than anything else.)
The FAA takes pilot rest time seriously. Considering car accident deaths exceed airplane deaths by a few orders of magnitude, it's about time the rest of the world take rest equally seriously as well.
Comparing autopilot Tesla fatalities versus average fatality rate on one road section is dishonest.
>According to Tesla there is a fatality every 94 million miles (150 million km) among all type of vehicles in the U.S.
>By November 2016, Autopilot had operated actively on hardware version 1 vehicles for 300 million miles (500 million km) and 1.3 billion miles (2 billion km) in shadow mode.
Those numbers are 9 months old and only apply to Autopilot v1 and not the Autopilot v2+ introduced late last year. I wouldn't be surprised if the current number is in the 500+ million mile range with only a single fatality. The sample size is obviously small, but there seems to be a clear improvement over manual control.
 - https://en.wikipedia.org/wiki/Tesla_Autopilot
EDIT: With chc's and my post we have 3 numbers and dates for reported Autopilot miles. Projecting that forward at a linear rate (which is conservative given Tesla's growth) would put us at roughly 750 million miles today.
What I wonder when I see these statistics, though, is whether all miles are really equal? For example, are Tesla drivers more comfortable using Autopilot in "easy" driving situations? Is there really a one-to-one correspondence in the distribution of the kinds of miles driven with Autopilot on vs. normal cars?
Furthermore, the metric commonly cited is "fatalities every N miles." Are there fewer fatalities because Autopilot is great, or because Teslas are safer in general? Has there been a comparison between fatalities with/without Autopilot strictly on Teslas? Even then, it seems to me we are subject to this potentially biased notion of "miles" I mentioned previously. The Wikipedia article you mentioned cites a 50% reduction in accidents with Autopilot, but the citation is to an Elon Musk interview. I haven't yet seen anything official to back this up, but if anyone has any information on this, I'd love to see it!
I'd love to see official work that explores that angle (rather than a claim from an interview, which is what the Wikipedia article refers to), I just haven't seen any document/study about it yet.
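Beyond the mileage-mix question, the raw counts are still tiny. A quick sketch using only the figures cited upthread (one fatality, ~300 million Autopilot miles, the ~1-per-94-million-mile US average) and an exact Poisson interval shows how little that can settle on its own:

    # Rough sketch: how much can one fatality in N miles actually tell us?
    # Figures are just the ones cited upthread, used purely for illustration.
    from scipy.stats import chi2

    def poisson_rate_ci(events, exposure, conf=0.95):
        """Exact (Garwood) confidence interval for a Poisson event rate."""
        alpha = 1 - conf
        lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / (2 * exposure)
        hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * exposure)
        return lo, hi

    MILES = 300e6        # Autopilot miles cited above (hardware v1, Nov 2016)
    US_RATE = 1 / 94e6   # fatalities per mile, the US average Tesla cites

    lo, hi = poisson_rate_ci(events=1, exposure=MILES)
    print(f"Autopilot 95% CI: {lo * 1e8:.2f} to {hi * 1e8:.2f} fatalities per 100M miles")
    print(f"US average cited: {US_RATE * 1e8:.2f} per 100M miles")
    # With a single event, the interval easily spans the cited US average, so the raw
    # counts alone can't yet distinguish "safer" from "about the same" (or even worse).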
That's got to open up some liability questions. You can bet when people die it will be tested in court. You could make a case that Tesla's going to be liable for many level 2 accidents in the long run anyway, so might as well go all in ASAP.
I think cameras will hit parity with the human eye, but the question is will it happen before or after Lidar becomes more affordable and compact.
I've probably gone about 8,000 miles on autopilot in mine (AP1) and it is truly amazing. After a road trip stint of ~200 miles, I feel much more energized and less fatigued in the Tesla with autopilot engaged 99% of the way than I do in previous non-autopilot cars. It really is a game changer. You may think regular driving doesn't require much brain activity, or that the Tesla cruise control and auto steering don't do much, but you really don't realize how much your brain is working to handle those things until you can take a step back and let the car do most of the work. Then you can focus on other things on the road that you didn't notice before - the behavior of other drivers, for example. I can watch a car near me with greater focus and see what that person is doing.
Regardless, if you have not driven one, I highly encourage it. You really need to take it out on the highway and drive it for 30 miles+ to really understand how amazing it is. I've driven other cars with "autopilot", and just like car reviews say, they are nowhere close. (Mercedes, Cadillac, Volvo, others with just auto cruise control). It's just one more reason why current Tesla owners are so fanatic about their car, there is nothing else like it and most likely won't be until ~2020 (maybe).
Would you say the same after if you happen to get involved in a serious autopilot accident though? That's the question.
It's very easy to be all roses before one sees any issues.
For instance, take the one fatality on autopilot to date in a Tesla, where the guy was coming up to the crest of a hill with a white 18-wheeler crossing perpendicular to the highway. Yes, the autopilot misread the 18-wheeler as a sign above the road (this issue is not fixed). But at the same time, the guy completely disregarded Tesla's instruction to keep your eyes on the road at all times. Turns out, he was watching a show on his phone.
But yes, I would still say the same thing if I was using autopilot properly as intended, i.e., not watching movies while in the driver's seat of a moving car (which is against the law regardless). I don't think there are any serious accidents to date where the driver was using it properly and following the rules. As Tesla states, autopilot is in beta (and most likely always will be); that's not to say it is unsafe, but the driver must stay aware, follow the rules, and know what autopilot is and is not capable of.
I'd say it took me about two weeks of first using autopilot to understand its capabilities.
Also the best part, it keeps improving in my vehicle through updates. It's pretty impressive how good the updates from Tesla are.
Only, people have been driving themselves for a century and have a pretty good idea of how safe it is, including how safe it is for them and their skills / city / etc., as opposed to some "one size fits all" average.
>(sure isn't statistics)
Well, it can't be statistics, because all we've got is the BS "statistics" from the self-driving car companies. Only a sucker for new technology would accept those, as there are tons of differences from regular driving. They take the cars out in optimal conditions (no rain, snow, etc.), they drive specific routes (not all over the US), they often have supplementary humans on board to check (do they count the times when humans had to take control and saved the day as "self-driving accidents" or not?), and tons of other things besides.
I've never seen autopilot disengage by itself. It will after 3 warnings to the driver where they don't touch the wheel; I've never done that, though.
There was one time, not quite an emergency, when my contact fell out of my eye into my lap somewhere, and because I had autopilot on, it allowed me to safely take the few extra seconds needed to look down and find it. Without autopilot, looking away from the road for that long would have been considered very dangerous.
I believe automatic transmission (AT) car drivers are more easily distracted than those driving stick shift or manual transmission (MT).
But AT makes life easy for so many people, that no one even considers if AT is making drivers less attentive.
A classic example of this, the worst case scenario I have seen in person, is that of a woman talking on a mobile phone in one hand while switching lanes and taking a left turn at a traffic signal. Things like this just wouldn't be possible if that woman was driving a stick shift.
In that driver's case, a Tesla-like Level 3-5 driving system would probably be ideal, making it much safer for the drivers around them.
So, should we go back to MT for everyone or nothing? Just to be under the impression that it's much safer? Or should every vehicle be a level 3 to level 5 autonomous vehicle?
I think people choosing any of the above options will have valid points and research to prove their points. Only time will tell when and how any of those research results continue to hold true.
FWIW, in the UK where manual is more or less the norm, using a phone whilst driving is still fairly common (the penalties were increased recently due to people ignoring the law). From what I've seen, a lot of people will use the gear stick with the phone in the same hand (i.e. brief break in conversation whilst they switch gear).
I heard that this is why autobahns are made with unnecessary curves/turns.
This approach worked well (for some parties) in many other areas, for example in education: now the teachers are the only ones left responsible for the structurally failing education system.
Now, if this were the 1930s, with hundreds of independent automakers, perhaps the invisible hand of the market would fix this.
I don't see how you can get to L5 without L2 and the learning that is going on in these cars today.
You can use the automation as a backup safety feature until it's L4. For example, the car has the ability to stop if you try running a red light, but you otherwise are required to drive the vehicle.
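As a minimal sketch of what such a backup "guardian" check might look like (the red-light example, inputs, and thresholds are all invented for illustration, not any shipping system):

    # Toy "guardian" check: the human drives; automation intervenes only when a
    # violation/collision is predicted. Inputs and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class State:
        speed_mps: float             # current speed
        dist_to_stop_line_m: float   # distance to a red light's stop line
        light_is_red: bool
        driver_braking: bool

    MAX_BRAKE_MPS2 = 7.0   # assumed guaranteed braking capability
    MARGIN_M = 2.0         # safety margin before the stop line

    def guardian_brake(s: State) -> bool:
        """Return True if the car should brake on the driver's behalf."""
        if not s.light_is_red or s.driver_braking:
            return False
        stopping_dist = s.speed_mps ** 2 / (2 * MAX_BRAKE_MPS2)
        # Intervene only at the last moment a full stop is still guaranteed to succeed.
        return stopping_dist + MARGIN_M >= s.dist_to_stop_line_m

    # Example: 15 m/s (~34 mph), red light 18 m ahead, driver not braking -> intervene.
    print(guardian_brake(State(15.0, 18.0, True, False)))   # True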
Automated cars don't just have to be better than humans, they have to be as good as humans with automated safety features.
So there is always something to do.
This isn't building a bridge. If a company builds a bridge and it collapses and kills people, and it turns out they didn't hire qualified structural engineers or that the CEO ignored warnings from the engineers to push the project for a scheduled release window or to keep profits high, the CEO goes to prison for criminal negligence. With self-driving cars, it's a different story COMPLETELY. You're talking about SOFTWARE. No company that's killed people with software has ever been found guilty of criminal negligence. And they won't be for the foreseeable future.
This is how self-driving cars will go. I'll give you whatever you want if I'm wrong. I'm that confident. A company, using standard modern business practices (that means doing absolutely every single thing research has shown destroys productivity and ensures the end product will be as unreliable and poorly engineered as possible: open floor plan offices, physical colocation of workers, deadlines enforced based on business goals rather than engineering goals, business school graduates being able to override technical folks, objective evidence made subservient to whoever in the room is more extroverted, aggressive, or loud, etc. You know, you probably work in exactly the kind of company I'm talking about, because it's almost every company in existence. Following practices optimized over a century to perform optimally - at assembly-line manufacturing processes. And absolutely antithetical to every single aspect of mental work.) will rush to be first to market. Maybe they'll sell a lot, maybe not. That's hard to call. What's NOT hard to call is the inevitable result. Someone dies. Doesn't matter if it's unavoidable or not. No technology is perfect. That doesn't matter either. It won't matter what "disclaimers" the company tries to pull, trying to say it's in the driver's hands. The courts won't care about that either.
But... they will absolutely get away with it. They will not be fined, they will not be forced to change their practices (most likely they will not even be made to REVEAL their development practices at all). You see, if the courts bother to ask what their practices are, their lawyers will point out it doesn't matter. There's no such thing as "industry standard practices" that you could even CLAIM they failed to follow. So their software had bugs. As far as the court is concerned, that's a fact of life, it's unavoidable, and no company can be held responsible for software bugs. Not even if they kill people.
So they'll get away with it - in the courts. In the court of public opinion? Nope. You see, even if they made their self-driving cars out of angel bones and captured alien predictive technology and it never so much as disturbed someone's hairdo, they are destined to fail as far as the public is concerned. Because human beings are, shocker, human beings. They have human brains. Human brains have a flaw that we've known about for ages. Well, by "we", I mean psychologists and anyone who's ever cared enough to learn Psych 101 basics about the brain. There is an extremely strong connection between how in-control a person feels and how safe they feel. Also, feeling safe is stupendously important to humans. This is why people are afraid of flying: if things go wrong, there's nothing they can do. (The same is true when they're driving a car, but people are often just wrong and think they have some control over whether they have a car accident or not. No evidence suggests they have the ability to avoid most accidents.) If the self-driving car goes wrong while they're not paying attention, there's nothing they can do. People will be as afraid of them as they are of flying.
And if you haven't noticed, our society deals poorly with fear. They LOVE it way too much. They obsess over it. They spend almost every waking hour talking about it and patting themselves on the back about what they're doing or going to do to fix their fears and the fears that threaten others. Mostly imagined fears, of course, because we're absurdly safe nowadays. So it will be the only thing talked about until unattended driving laws get a tiny extension to cover the manufacturing of any device which claims to make unattended driving safe. It'll pass with maybe 1 or 2 nay votes by reps paid for by Uber, but that's it.
This is a great analogy, because at the dawn of flight, many people were really, really really afraid of flying -- for very good reasons: the airplanes of the day were incredibly dangerous.
Yet people still flew, and flew more and more, despite many very public disasters in which hundreds of people died, and the airline industry grew and flourished.
Now most people don't think twice about flying, as long as they can get a ticket they can afford. Sure, some people are still afraid of flying, but most of even them fly anyway if they have to, and the majority aren't afraid at all or don't think about it.
When a mom is saying 'I'm not putting my children in one of those killmobiles' and someone says 'well actually ma'am, it's much safer, and you're endangering your child's life significantly by taking the wheel', that person gets punched in the face and lambasted on social media as an insensitive creep. That's just how it goes.
Are you sure? What about IEC 61508 and ISO 26262? The latter especially, as it was derived as a vehicle-specific version of the IEC standard.
It's an industry-wide standard:
...geared specifically to ensuring the safety of the electrical, electronic, and software systems in vehicles.
Look it up - you'll find tons of stuff about it on the internet (unfortunately, you pay out the * for the actual spec, if you want to purchase it from ISO - it's the only way to get a copy, despite the power of the internet, afaict).
...and that's just one particular spec; there are tons of others covering all aspects of the automobile manufacturing spectrum to attempt to ensure safe and reliable vehicles.
Are they perfect? No. Will they prevent every problem? No.
But to say there aren't any standards to look to isn't true.
Mostly, the drivers, insurance companies, and other people on the road have fewer accidents.
My point is that all of this affects the public perception of self-driving cars - and if we want them to succeed, we need to make absolutely sure that that perception is good. We can't have the nonsense Tesla is trying to pull off at the moment, where they call their system "autopilot" but they know the system cannot detect obstacles at around pillar height, gets blinded by the sun, and can swerve into the oncoming lane before just switching off. These are not theoretical problems; they happen in cars that are out there, right now. And if it happens to a regular Joe Smith, then Mr. Smith will think the technology is crap, and we can't let that happen.
We make these exact choices all the time.
I am saying we should "let more people die" now, so that we can save more later. That's not a novel concept.
Ideally, we should embrace them even if they are slightly more dangerous than human drivers, because we are getting the benefit of the time that would otherwise be spent driving.
I've been saying for years that the right approach was to take the technology from Advanced Scientific Concepts' flash LIDAR and get the cost down. I first saw that demonstrated in 2004 on an optical bench in Santa Monica. It became an expensive product, mostly sold to DoD. It's expensive because the units require exotic InGaAs custom silicon and aren't made in quantity. Space-X uses one of their LIDAR units to dock the Dragon spacecraft with the space station.
Last year, Continental, the big century-old German auto parts maker, bought the technology from Advanced Scientific Concepts and started getting the cost down. Volume production in 2020. Interim LIDAR products are already shipping in volume. Continental is quietly making all the parts needed for self-driving. LIDAR. Radar. Computers. Actuators. Cameras. Software for sensor integration into an "environment model". They design and make all the parts needed, and provide some of the system integration.
Apple and Google were trying to avoid becoming mere low-margin Tier I auto parts suppliers. Continental, though, is quite successful as a Tier I auto parts supplier. Revenue of €40 billion in 2016. Earnings about €2.8 billion. Dividend of €850 million. They can make money on low-margin parts.
Continental may end up quietly ruling automatic driving.
I suspect that if/when LIDAR is cheap enough, Tesla will use it.
In the meantime they outfit every single car with the best hardware that is realistic from a cost standpoint today, instead of waiting til 2020.
And so far, it just sits there and does nothing useful, since the self-driving software that can do the job safely with those sensors doesn't exist.
> Musk's argument is that we know for certain that the entire road system can be navigated by visual cues
It messes with formatting on both mobile and desktop.
I know this behavior happens with IE, I believe I've seen it in other desktop browsers but can't verify at the moment.
There are a multitude of mobile apps for consumption of HN, most of which solve the code-block-line-overflow problem by adding a horizontal scroll, which only makes the problem worse by forcing users to scroll side to side to read the full quote.
Just use `>` at the start of the line to signify that it's a quote, please.
HN does that natively - just drag the code block around with your finger. Still sucks to use though, so > quoting is the way to go.
People and machines behave and understand the world differently. Just because it works for people doesn't mean it'll work for computers.
A vast amount of data has produced every driver.
Now if you want to just look "total experience in the world as a whole" the numbers look a bit different, but if anything that just accentuates the differences here. We don't currently have a way to teach computers to construct mental models of how the world works the same way humans think about it, which would be necessary to use training data that was about the world as a whole.
I agree fundamentally though that it's a weak argument to say a certain approach should be taken just because it's how humans do it.
I'm hesitant to argue against Musk, yet the actual goal is to navigate better than humans, and it seems reasonable to suspect that having wider-range sensors would be a good (and maybe necessary) foundation to achieve it.
Without good hardware then you're stuck jumping through hoops to do all the processing software-side which leaves less time to react quickly and accurately.
I worked on a team that evaluated the ASC unit a few years ago, but they found it unusable due to bloom issues. Has that changed?
Flash LIDAR has a tradeoff between range and field of view. You can concentrate your energy or fan it out. There's been some interest in systems where you can narrow the output beam when you need to. Or you can combine wide-beam short-range units with narrow-beam long range ones.
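A back-of-the-envelope way to see that tradeoff (a toy scaling argument under my own simplifying assumptions; real link budgets have many more terms):

    # Toy flash-LIDAR tradeoff: for a fixed pulse energy and a fixed detection
    # threshold, the return signal scales roughly with E / (solid_angle * R^2),
    # so max range goes as sqrt(E / solid_angle). Spreading the flash over a wider
    # field of view therefore costs range. Numbers are invented.
    import math

    def max_range_scale(fov_deg, ref_fov_deg, ref_range_m):
        """Scale a reference range to a new (square) field of view at the same pulse energy."""
        # Solid angle of a roughly square FOV grows as (angle)^2 for small angles.
        energy_density_gain = (ref_fov_deg / fov_deg) ** 2
        return ref_range_m * math.sqrt(energy_density_gain)

    # If a 120-degree flash unit reached ~50 m (assumed), a 20-degree unit with the
    # same pulse energy and detector would reach roughly 6x that:
    print(round(max_range_scale(20, 120, 50)))   # ~300 m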
There are other flash LIDAR companies, but many of them are vaporware. Quanergy, which bills itself as "the leading provider of solid state LIDAR sensors" announced a unit last year, but doesn't seem to have shipped. Velodyne wants to come out with a solid state product. TetraView is trying for a low-end semiconductor solution that uses common sensors. Luminar and LeddarTech want to use MEMS mirrors.
A long-range narrow-beam MEMS-steerable flash LIDAR might be useful. Look out 300m in the direction you're going when at high speed, and use other wide-angle sensors for a side view and in cluttered city environments.
Somebody is going to get this working at an acceptable price point reasonably soon.
(as an aside: I don't buy the moving parts argument -- a modern car has thousands, many of which have much tougher jobs than "spin at 900 RPM while on".)
I think I agree with this, but is LIDAR expected to work in the rain?
Instead ask: "In good weather, can LIDAR-less systems match the safety of LIDAR?" The answer is "not yet".
Until LIDAR-less systems can match the safety of LIDAR they will simply be banned from the roads, or limited to certain situations such as rainy weather.
Regulators, politicians and society will not allow Tesla to operate a system which in good weather has a much higher accident rate than is necessary - and they will not accept "cost reasons" as a valid excuse for not installing a LIDAR.
Laws are frequently named after a single child who died.
And Tesla's engineers aren't the first to bellyache about being asked to make the impossible a reality.
I don't think the majority understands what safety means in mass transportation. It's not about running miles and miles without accidents and basically saying "see?". It's about demonstrating /by design/ that the /complete/ system over its /complete/ lifetime will not kill anyone. In terms of probability of failure, it translates into demonstrated hazard rates of less than 1E-9 /including the control systems/. This takes very special techniques, and if it could have been done using only vehicle sensors, it would have been adopted by us long ago. I am also sorry to report that doubling cameras and sensor fusion will not get you to an acceptable safety level. We've tried that too, rookies.
Is it "fair", to use Elon's argument? After all, isn't additional safety enough compared to existing situation. Ah but we have been there too! For driver assistance it is indeed better. Similar systems were deployed during the second half of 20th century (e.g. KVB, ASFA, etc). But the limit is clear. It only /improves/ driver's failure rate. It does not substitute for the driver. If you substitute, you have to do much much much better. Nobody will ride a driverless vehicle provided the explanation that it is, you know, "already an improvement when compared to a typical driver". Is it fair? Maybe not, but that's the whole point for entrusting lives to a machine.
In other words, it is you that is taking your own opinion for the universal truth. You reason in a model where the driver is the only responsible party for reckless driving.
An even bigger driving factor (pun intended) will be that the cost of a driver relative to operation is much, much higher in a car. A solution that is better on both safety and cost will be quickly adopted.
It does not matter how well your car can brake. Can your autonomous driving system guarantee it will always brake as hard and as fast as required when you will need it? Can you guarantee that the system will not have a bug the day it needs to brake?
Trains brake very well. There are even rubber-wheeled metros that brake so well we have to limit them in order not to send everyone flying in the wagons :).
But in the safety calculations, standards like IEEE 1474 assume degraded braking capability, and also consider that the preceding train is at a stop. In other words, to be declared safe for mass usage, you can't assume the average case. You must assume the adverse case. You will not have a driver to notice that the car brakes poorly, or to be confident enough to drive very close to the preceding car.
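Here is a stripped-down sketch of that worst-case ("brick wall") reasoning, with invented numbers rather than anything taken from IEEE 1474:

    # Brick-wall safe-braking sketch: the follower must be able to stop, using only its
    # *guaranteed* (degraded) braking rate and after a worst-case reaction delay, before
    # reaching the current position of the vehicle ahead, which is treated as already
    # stopped. Numbers are illustrative, not taken from any standard.

    def safe_following_distance(speed_mps, reaction_s, guaranteed_brake_mps2, margin_m=5.0):
        """Minimum separation so a worst-case stop still clears the 'brick wall'."""
        reaction_dist = speed_mps * reaction_s
        braking_dist = speed_mps ** 2 / (2 * guaranteed_brake_mps2)
        return reaction_dist + braking_dist + margin_m

    v = 22.0  # ~80 km/h
    print(safe_following_distance(v, reaction_s=1.5, guaranteed_brake_mps2=1.2))  # degraded: ~240 m
    print(safe_following_distance(v, reaction_s=1.5, guaranteed_brake_mps2=4.0))  # nominal: ~99 m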
For the second paragraph, again, this is certainly true for driving assistance, but will most likely not be enough for driverless, as it was / is not enough for trains. Of course you may disagree.
EDIT: this is a good reference: https://www.linkedin.com/pulse/safe-braking-model-explained-...
You are using appeals to authority and emotion rather than the scientific method.
I have no substantial comment other than to note that when a driverless car is insurable by Liberty Mutual at normal rates, with all liability held by Tesla, then it's probably reasonably safe.
However, I don't see why the fears of an industry that needed extremely good results immediately must be extended to one in which all signs point to progressive improvement. The danger is that the general public is quick to extend blind trust, but the ratio of the number of people who are / can be in control to the number of people affected is vastly different, at least for cars and trucks, which as far as I know are the first targets in this industry. Buses might get automation more slowly than the rest of the fleet (no source, just off the top of my head).
First, you do not need a self-driving car to achieve a 5x improvement. Good driver assistance systems will get you there.
Then, put yourself in the shoes of the politician who is authorizing this technology for mass use. Picture the number of cars that are going to run everywhere once this authorization is given and ask yourself: what will happen to him once the first hundred people have died because of a wrong autopilot decision (it will be figures of that order, since you are OK with a 5x reduction)? Do you think he will be sleeping well at night? Do you think his political career will be better, especially if you consider my first point? Would you step into his shoes?
> You are willing to take the risk, but society as a whole will not.
What are you talking about? People very objectively ride planes that are not completely safe. Airlines that are nowhere near completely safe still get tons of customers.
But more importantly a couple paranoid regulators don't really represent "society as a whole", so you're not drawing on particularly relevant experience here when you talk about something as locked-down as rail or planes. With the constant looming death toll of human car crashes and state by state regulation there's a lot of room for getting these systems on the road.
But in the end, we all accept the risk of riding planes that we don't control because we entrust our lives to /trained pilots/, not because of such systems. As an illustration, the debate is still vigorous about whether or not a computer should be allowed to sit between the driver and the actuators. It is also the case for cars, especially after the Toyota blunders, so I do believe all this body of experience is relevant and cannot be easily "disrupted".
EDIT: these standards are applied worldwide, despite the EN prefix.
EDIT: good reference site: http://en50126.blogspot.it/2008/07/velkommen.html?m=1
> Nobody will ride a driverless vehicle provided the explanation that it is, you know, "already an improvement when compared to a typical driver".
I don't necessarily think this is true (perhaps age-correlated?). Let's set aside the issue of whether or not we can do it, for now, and assume that we have a scenario where self-driving cars are safer than human drivers.
In this context, I can easily imagine a political campaign à la "Think of the children!" that paints human drivers as fundamentally unsafe, advocating for a self-driving mandate in urban areas. Perhaps with a Cash-for-Clunkers type of deal to aid the transition. I am not saying this is desirable, merely plausible; it has all the elements of good politics: an easily-grasped bright-line dichotomy, emotional manipulation, and massive corporate benefits (for vehicle manufacturers, self-driving software vendors, and transportation providers like Uber).
For the second part of your argument, there are multiple problems, again backed by examples in the railway world.
First, a politician will think twice before casting a devilish image of drivers. Having all the professional drivers against you is essentially a political hara-kiri. There is a reason why it took 10 years to migrate Paris Line 1 to driverless, and it is not technology.
Second, you would not believe how difficult it is to reach consensus on the fact that drivers are unsafe compared to a machine. Still today, even after decades of operations with no incident, there are still people in the industry that argue otherwise... The most telling example is the high speed derailment in Santiago de Compostela, essentially due to the fact that the driver is considered a good enough guarantee to drive a high speed passenger train up to 200 KPH... Sigh.
Not in the daily lives of the vast majority of Americans.
The three train systems I've used on a regular basis in the last half decade (NJ Transit's NE Corridor, Amtrak's Northeast Regional, and Caltrain) very much do not have driverless trains.
Two of those are in the top 10 commuter rail systems in the US.
I'm ranting here, but you're on an undeservedly high horse.
There has been 1 accident in 30 years attributed to a driverless rail mass transit control system, despite millions of kilometers travelled and constant worldwide growth. Yes, rail has understood what driverless safety means. You may call it a high horse, but driverless /is/ a high horse. That's my point.
In a way, your comment confirms this success. You think it is easy because you are used to it at an instinctive level. And that is the goal, precisely.
But it is not as simple as what you imagine. For instance, trains have multiple degrees of freedom. As an illustration, train builders carefully calculate the dynamic envelope of the cars (e.g. the worst lateral deformation due to bogie flexibility) and check it against the tunnel wall geometry.
> I don't think the majority understands what safety means in mass transportation.
And I inferred that you meant that rail understands safety, generally, better than any other mode of transportation, which is what I took issue with. I believe you 100% when you say that only one accident in 30 years has been attributed to driverless trains.
What I find horrifying about rail is how simple PTC should be: "am I exceeding the maximum speed for this stretch of rail? is there a train in front of me? if yes to either, slow down" and yet Amtrak says it will take billions of dollars and decades to install. I'm thinking of accidents like the NE Corridor derailment in 2015.
PTC is a problem of ROI for operators. The incentive is apparently not so great, so it will take decades. Also, and this is frequent in rail, any complicated solution that leverages the existing infrastructure will be explored, further entangling the situation. In more controlled economies, the state has long mandated the technology to be used. Some may say it is a case where the free market cannot be trusted to make the correct decisions. It is a societal debate, no longer an engineering topic.
 Not too bad: https://en.m.wikipedia.org/wiki/Positive_train_control
> the derailment was caused by the train's engineer (driver) becoming distracted by other radio transmissions and losing situational awareness, and said that it would have been prevented by positive train control, a computerized speed-limiting system
Remember, I was initially referring to rail's approach to safety generally, not just in the context of automation.
The second you put the train in the open air where other people are, where weather is, where nature is, those things kill people on the regular.
Get out of your tunnel, and suddenly your "success" has nothing to do with the control system.
I don't want to sound obnoxious, but driverless metros routinely operate in the open air :). Tunnels are not a mandatory condition for such systems. They manage strong winds, strong rains with reduced adhesion conditions, etc. They manage the possibility of people or non-protected trains intruding on the track. A lot of brainpower was put into automating decisions related to fire scenarios or emergency evacuations. Metro lines like Paris Line 1 are extremely busy. There are plenty of ways to kill people, yet they don't. The "stuck handbag" scenario alone is a nightmare to manage. Yet they manage it.
A driverless system is truly a massive piece of software. You guys should come in the tunnels and have fun with us!
This is demonstrably false. There are many metro systems with tracks on viaducts and at-grade (e.g. "open air"), with GoA 4 control systems.
These systems can and do detect objects on the track. They also deal with severe weather (heavy thunderstorms, snow, etc).
I think you're discounting the experience of delivering railway systems, and the lessons learned in deploying them that could be applied to self-driving cars.
> If you substitute, you have to do much much much better. Nobody will ride a driverless vehicle provided the explanation that it is, you know, "already an improvement when compared to a typical driver". Is it fair? Maybe not, but that's the whole point for entrusting lives to a machine.
I've heard Elon mention this, and while I don't know exactly how it's measured, he claimed that fully autonomous cars would have to be 10x "better" at driving than a human before they would be allowed. I'm paraphrasing when I say "better", but I'm sure I could find the video of him talking about it.
At any rate, your comment is interesting.
I don't know where Elon gets his numbers from, but according to EN 50126 practitioners, human error is in the range of 2E-4 to 1E-3, whereas safety functions are classified at least SIL2, and that means 1E-6. In other words, the system must be 1000x better than a human.
Here is a good reference:
If human flight was possible, surely they would have done it hundreds or thousands of years ago.
Your argument disregards not only advancement in theoretical knowledge, but also advancement in engineering in the form of computational power and sensor sensitivity.
In the case of establishing the correct position of vehicles, whatever the technology used it will have an error of measurement. This error accumulates over time, especially if you are measuring displacement. You will need reference points to retain your positional uncertainty within an acceptable value. An autonomous vehicle uses a map containing features that can be used as reference points and triangulate its position based on that. A human brain does this constantly using its eyes, ears and memory. It is an example of diversity: you correlate the perceived displacement by your internal ear with what your eyes sees, using a reference learned in your brain about your surroundings and sensors' ability.
To reach 1E-9, we cannot rely on such self-learned things: we can agree that the probability for the landscape to change is quite higher than 1E-9. In the case of trains, driverless trains use coded beacons / loops or GPS diversity. Such techniques imply that the infrastructure around your vehicle collaborates to the safety of the system. Hence my statement.
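As a crude illustration of why those fixed reference points matter (a 1-D toy with invented drift numbers, nothing like a real odometry error model):

    # Toy 1-D dead-reckoning sketch: odometry error grows with distance travelled,
    # while beacons along the track reset it to a small residual. Numbers are invented.
    import numpy as np

    def mean_abs_error(dist_m=5000, beacon_spacing_m=None, drift_per_m=0.002,
                       residual_m=0.05, trials=2000, seed=0):
        """Average absolute position error after dist_m metres, with optional beacon fixes."""
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, drift_per_m, size=(trials, dist_m))
        if beacon_spacing_m is None:
            err = noise.sum(axis=1)                       # error over the whole run
        else:
            # Only the noise accumulated since the last beacon fix survives, plus a residual.
            tail = dist_m % beacon_spacing_m or beacon_spacing_m
            err = noise[:, -tail:].sum(axis=1) + residual_m
        return np.abs(err).mean()

    print(mean_abs_error())                        # pure dead reckoning over 5 km
    print(mean_abs_error(beacon_spacing_m=500))    # beacon fix every 500 m: much tighter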
This being said, the rail industry has long been dreaming of a completely vehicle-centric solution. There are high stakes: reduced costs, competitive advantage, etc. Last attempt here: http://www.alstom.com/products-services/product-catalogue/ra...
On the bad side, this risk culture destroys large scale innovation, and even safety in the long run. The problem is that we've adopted an attitude that safety is always first, meaning that it is immoral to do something in a less safe way, no matter what the other benefits might be. This means we get a regulatory, tort, and engineering culture that is willing to use existing systems, because they are grandfathered and therefore "reasonable", but will only adopt new systems if they can be shown to be perfectly safe.
This culture is fairly new. I date it to sometime in the seventies. Ralph Nader and the Pinto were both symptoms and causes. You can see the transition in, for example, how America responded to the Apollo 1 fire vs. the Challenger accident.
Since you're a rail engineer, let's look at rail mass transit systems. The NYC subway has a limit of roughly 30 trains per hour, or two-minute headways. All of the braking rates, margins of safety, and signaling systems do their job, and you never see two trains hitting each other. During rush hour, though, trains are packed way over capacity, and this is mostly because of this headway limitation. If you were to imagine this on the freeway, you'd have to leave two miles between you and the next car. The cost for this level of safety is that about half a million people have to spend an hour every day in miserable conditions. Many of them choose to drive or take cabs instead, to avoid this.
When they make this decision, none of them even give the slightest thought that driving is maybe 10x more dangerous than taking the train. They understand that safety for both is plenty good, and the way that they spend two hours of their day, 13% of their waking hours, is a lot more important than some tiny difference in their risk of dying on the way to work.
Ultimately, while very well intentioned, this safety culture is inhuman. It's pessimistic. It says "nothing in your life could possibly be so important that it's worth any possibility of you being injured."
So, this is what we face now, over and over. With self driving cars, it's, "sure, the chance of you getting killed is half as much as if you had driven, plus you get all of that time back to have a nice conversation, read a book, or idly stare out the window, but since it doesn't meet our aviation/rail transit level of safety, you can't have it." You even see it with kids -- your twelve year old kids can't be allowed out of the sight of an adult, or you're a dangerous, neglectful parent. It does not matter that childhood is the process of learning how to be an adult, and that becoming an adult is a process of progressively mastering greater and greater freedoms -- the most important thing is that we are never seen to expose children to even the most minute level of risk.
I really want us to have a real conversation about acceptable risk, informed consent, and human progress. We need it badly to regain our souls.
So true. It's an issue in General Aviation, where there has been huge progress in safety devices (electronic cockpit gauges, airbag seat belts, etc.) but it has been illegal to install them in older aircraft because of the FAA's very slow, expensive certification process. Finally, in the last year or so, the FAA got serious about what's called "Part 23 reform", which will vastly streamline the process for safety upgrades in older aircraft.
Also, I don't want this comment to be interpreted as shitting on the FAA. I'm a libertarian-leaning former liberal who generally has very low confidence in our government, but I consider the FAA, with their insane safety record of which I'm a massive fanboy, to be an exception.