Tesla Autopilot and Euro NCAP (autoevolution.com)
66 points by camtarn 40 days ago | 125 comments



The article claims that Euro NCAP's problem with automated systems was that "apparently they work too well."

It then goes on to enumerate a host of problems with Tesla's Autopilot: a lack of geofencing, poor monitoring of human attention, "late or no response" to cut-in and cut-out scenarios, AutoSteer's resistance to human control, the system's trouble with some curves, and of course Tesla's inconsistent and confusing messaging about what exactly Autopilot is meant to be.

Far from being a "trashing," it sounds like the Euro NCAP did exactly what it was supposed to do, which is to advocate for safety.


Swap “advocate for safety” for “defend the interests of German car manufacturers” and then I’ll buy it


I really don’t like geofencing or aggressive human attention monitoring, so the lack of both of those things is a feature. The steering wheel attention requests are pretty frequent anyway.

The other complaints are largely remedied by being a safe driver - the car says all over the place what the limitations of autopilot are when you go to use it.


Did you read about the case where the driver was sleeping and the car did not stop because it did not realize it? Do you think this is a feature?


> Do you think this is a feature?

If you're talking about the recent case where police followed the car for 7 minutes and brought it to a controlled stop then this is pretty much a resounding yes! If I fell asleep at the wheel of my car I'd more than likely be dead within seconds.


I don’t think that’s what was happening given how much it asks you to touch the wheel.

Same thing could happen with cruise control and be even more dangerous.


What do you mean? I imagine the driver was sleeping with their hands on the wheel.

IMO these systems that work for 99% of cases but still require 100% driver attention are dangerous, and they should include as much software and hardware as it takes to make sure the human is paying attention and not sleeping or watching movies.


It doesn't really work for 99% of cases, it works for maybe 60% and most of that is uninterrupted freeway driving without construction zones.

I think it'd be hard to sleep with your hands on the wheel and cause enough resistance to keep the car from complaining. Though I suppose it's possible - I thought the driver was drunk in that case (could be mistaken)?
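For illustration, here's a minimal sketch of what a purely torque-based "hands on wheel" check might look like. This is entirely hypothetical, not Tesla's actual implementation, and the threshold value is made up:

    # Hypothetical torque-based driver-presence check (illustrative only).
    # Note it measures force on the wheel, not attention: a sleeping
    # driver's resting hands could conceivably apply enough passive
    # torque to pass a naive threshold test like this one.
    TORQUE_THRESHOLD_NM = 0.3  # made-up value

    def driver_seems_present(torque_samples_nm):
        # True if any sample in the recent window exceeds the threshold.
        return any(abs(t) > TORQUE_THRESHOLD_NM for t in torque_samples_nm)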


From what I read, the drunk driver is a different guy than the sleeping driver; you can google for more info.


The officers had visuals on the car for at least 7 minutes. It's exactly what happened: the guy was asleep at the wheel of his Tesla.


Also, I'd assume that if police officers noticed one such case it's pretty plausible there have been and will be other, unreported cases where people got lucky.



Some don’t like seatbelts. But if your concern is safety, priorities change.


Motorcycles are still legal even though they’re far more dangerous than sitting in a car without a seatbelt.

There are reasonable trade offs to make.


driving without a seatbelt is not just more dangerous for you; it's also more dangerous for the people around you in the event of an accident, because your body can become a dangerous projectile.

obviously the projectile argument also applies to a motorcycle, but for the most part only the rider is exposed to more risk when they ride a motorcycle.



Having used it for about 24K miles, I think Tesla AutoPilot works incredibly well (using software 9.0 and AP2.0 hardware in my Model S).

And I'm really glad it isn't crippled with some geofencing that would limit it to where I could enable it.

Having said all that, the name is terrible and implies something it isn't capable of yet. It's a dangerously bad marketing choice.


I own the latest 5 Series BMW and paid for all the driving assistance add-ons. While the adaptive cruise control is awesome, the lane keeping assist is completely useless and works absolutely terribly; not sure who decided to charge money for this incomplete piece of software, it's unusable. My friend has a Model X and it's a night and day difference between the two. Tesla Autopilot is just in a different league compared to the BMW lane assist thing.


I found a use for it where I did not expect one. When I took my first big trip with my TM3 I was on a two-lane country road in western Ohio at night, in the pouring rain. The speed limit was 55 mph, and even while exceeding it, it was not uncommon to get passed.

The problem I faced was the glare from oncoming cars and trucks, some not dimming their lights. So I set AP to the task of keeping me in the lane, and when someone came at me too bright to see, I could focus on the white lane marking and the AP screen at the same time, assured I was in my lane.

Is it perfect? Oh hell no. It will do the Tail of the Dragon in north Georgia provided you keep it at the speed limit, but it can be a bit sketchy, as it likes the center line a bit on two-lane roads. I do know it's not meant for that, but when safe I had to try.

For general use I mostly rely on traffic-aware cruise control to keep a safe distance and speed while I do the steering, only letting it steer when fumbling for something in the car.

Edit: on the TM3 it can be cancelled by tapping the brakes or tapping the right stalk up. It is so easy to cancel that I do not understand NCAP's issue there. Did they not bother to read up on how to cancel it?


Tesla should just rename their ADS from AutoPilot to CoPilot.


That assumes benign intent. All that we have seen so far is a sales pitch finetuned juuuust on the line "we won't say it's autonomous (or that it's not), but we'll nudge suggestively, point and nod." Of course, when anything happens, the old spiel "we didn't actually say it" comes out.


I understand the argument that Tesla is generally misleading in their selling of Autopilot. I can also understand the argument that no company should be selling this type of semi-autonomous tech that can result in the worst of both human and autonomous driving. But I simply don't understand the argument that the only problem with this feature is the name. It just makes the whole thing seem silly.

We already put numerous requirements on people before they are allowed to legally drive. Why are we not blaming drivers for knowing nothing about their cars beyond the simple name of a feature? You literally can't turn the feature on in a Tesla without it warning you about its limitations. Doesn't the driver have some responsibility to pay attention to those warnings or at least spend a few minutes learning about this feature before they trust it with their lives (and the lives of others)?


I agree, drivers that do not pay attention share the blame too, but it is not fair to blame only the drivers. The "autopilot" requires that the human driver maintain 100% attention, but is the system monitoring the driver properly? Is it trying its best to keep the driver engaged with the road? No, it does not, because that is not comfortable for the customer, so safety is sacrificed.


Or don't borrow terms from other industries, making the meaning ambiguous, and just call it what it is: driver assist.


Isn't the point of a copilot that it can safely fly and land a plane, unlike an autopilot?


Maybe, but I think that's beside the point. It's about people's perception of what something should do based on its name. And 99% of people will associate co-* with help and auto-* with something that can do it on its own.


Planes can land entirely on autopilot.

Just requires the airport to have ILS, and a fairly modern autopilot.


The problem is that most people don't understand what the term "autopilot" refers to. In airplanes, autopilot can safely control the aircraft in a limited set of circumstances. It can't control the aircraft on the ground, nor can it control the aircraft during takeoff. And it can only land the aircraft on certain types of runways. Crucially, it requires human pilots to be monitoring the progress of the aircraft at all times because there are many situations that can arise while the system is engaged that require a corrective response. This seems to be no different from the Tesla system. Of course, due to the nature of driving on a crowded road versus being up in a relatively empty sky, there's a significant difference in the time available to apply these corrective responses.


Autopilots can in practice control ground manoeuvres, and take-off. That was demonstrated in the 1970s. There simply isn't the commercial impetus to push them to certification; it's cheaper for an airline to cancel a flight than to maintain such a technical capability, mainly due to paperwork.


Some can. Airplane autopilots are tremendously varied. Some are as simple as a gyroscope hooked up to the ailerons to keep the wings level. I’m always baffled as to why people think “autopilot” implies any sort of autonomy.


> I’m always baffled as to why people think “autopilot” implies any sort of autonomy.

It is literally in the name.


“Auto” just means “self.” Do autobiographies write themselves? Does the word “automobile” imply no driver is needed?


It doesn't "just" mean that. You aren't arguing in good faith when you choose to selectively use and ignore dictionary definitions of common words as it conveniences you.


"Auto" can also mean automatic or autonomous. You are trying to defend the meaning of autopilot by bringing in the aviation definition; it would be similar to me telling people that they are using language wrong because in chemistry or math the word X is defined to mean something a bit different.

In physics, acceleration is used when things slow down too, but at least in my language this is weird, and in day-to-day use acceleration means an increase in speed. If you want to understand how/why most people take autopilot to mean more than driver assist, just ask them, and stop telling them they are wrong because in domain X the word is used differently.


I’m not saying they’re wrong. I’m saying their understanding of the word baffles me. That’s not the same thing, and it’s kind of annoying to be attacked just for not understanding people.


Let me try to explain: people are not knowledgeable about how airplane autopilot works. What people understand when they hear "autopilot" comes from what they have seen in movies or games, read in books, or what the words actually mean.


There are contrary examples that normal people would know too. For instance, it’s common to use the phrase “on autopilot” to describe the state of a person who does something really stupid because they were following a routine without thinking.


No, but autobiographies are self-biographies and automobiles are self-mobile, so an autopilot would self-pilot, i.e. pilot itself.


And they do, but that doesn’t mean they’re autonomous or capable of handling any situation. A wing leveler is piloting itself.


Agreed. So "Auto Pilot" == "Self Pilot".

As in, it pilots itself. Which would indeed imply that no driver was needed.


Probably because that's what the name suggests.


as a Greek and Latin student, there are quite a lot of English words that don't quite mean what I would expect. if it's really important that I understand how a word is normally used in English (eg, some critical safety context), I look it up.


There are words and expressions well known from pop culture, like, say, time travel. Do you need to look up some physics book to find the definition, or find some experiments proving time travel is or is not possible?


I don't usually make $40,000 decisions that depend on time travel.


How do you research it though? Do you read reviews, watch videos, or search online for the technical manuals with all the details before you buy?

The reviews and videos may not contain all the details you will get in the technical manual that you don't have before you get the car.


if I'm spending that much money on something I can die in, you better believe I'm reading reviews in all the major car publications (and btw, all the in depth reviews do a pretty good job explaining what you need to know about tesla autopilot), watching reviews on YouTube, seeing what owners have to say on the relevant forums, etc. If the car has some new safety feature that I don't understand, I'm definitely reading everything the manual has to say about it before I drive off the lot.

then again I'm one of those weird people who actually takes the recommended safety precautions when I do dangerous things. I also do crazy stuff like checking my tire pressure.


Yes, but. It also requires the meat-sack to be sitting in the cockpit telling the autopilot which heading/altitude/waypoint to head to next, listening to ATC instructions, inputting the correct ILS frequency into the system, etc. And that's when everything goes absolutely fine.


Planes can absolutely land themselves. The only point you're making is that an autopilot cannot entirely replace a pilot, but that was never in contention. The poster above claimed an autopilot cannot land, it can, and it does every single day.

Additionally ILS frequency can be automatically set via electronic charts, particularly if the flight plan contains the destination runway (which isn't always possible).


> Planes can absolutely land themselves.

No. They can’t. Not unless you have a IIIc approach and autopilot. Those things are not common.

If you don’t have both of those things, the pilot must, at minimum, fly the flare (for a IIIb approach, which are somewhat common at large airports).

Even if you have a IIIc approach, the pilot still monitors systems, deploys high lift devices and landing gear, makes the go/no decision. There is no reasonable interpretation of “planes can absolutely land themselves” which is in any way true.


> No. They can’t. Not unless you have a IIIc approach and autopilot. Those things are not common.

So you're saying they can?


Wikipedia has an accurate summary: "[in all cases a CAT III autoland requires the] flight crew to monitor the system behavior very carefully and be ready to take control immediately." Oops.


Hah, “just”.

Very few airports have even one cat IIIc approach. Not all modern airliners have an autopilot capable of a IIIc approach (which requires triple redundancy, IIRC).

And you accuse others of arguing in bad faith...


So rename it to "student pilot" and change "driver" to instructor? This is why I don't work in marketing...


The point is that a copilot is there to assist the primary pilot, not be that primary pilot all of the time - even if in emergencies a copilot can take over.

Frankly - if you're asking for emergency "landings" I'd say modern driver assists are probably up to the task. Most have a pretty decent chance of bringing a vehicle to a safe stop. And if it's an emergency, and the primary driver cannot do so - then the currently still inevitable unreliability isn't much of an issue - it's still much, much better than nothing.

Copilot isn't a bad name change. Not that they'll ever do it.


This is about cars, and in driving "copilot" means something different than it does in airplanes.


Or link it with cruise control like other manufacturers are doing. The general public kind of gets what cruise control does, and that sets the right expectation. The public really has no idea what an aviation or marine autopilot system is all about.


I would be OK with regulations requiring them to call it "adaptive cruise control with lane assist" and forbidding automakers from coming up with their own names for it.


Yes, having tried it, I'm a big fan, but they should have called it "Ultra Cruise" or something like that.


It's really lane keep assist and dynamic radar cruise control. It's better than most manufacturers' systems.


This isn't true at all with Navigate On AutoPilot which was released a month or so ago.

You set your destination on the GPS and it will take you exit-to-exit on highways.

This is not lane assist or distance control you see on other cars.


Bolting a GPS-driven highway exit selection onto LKAS+ACC isn't really hard or risky. Once a control system can reliably keep a car safe in freeway traffic including lane keeping (even near gore points), lane changes, matching speed with traffic, and keeping safe distance from the car in front... just picking an exit given a destination address is the easy bit. It's a classic case of Moravec's paradox.
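To make the "easy bit" concrete, here's a toy sketch of a routing layer sitting on top of those primitives. All the names and the API here are hypothetical, purely for illustration; the real system is obviously far more involved:

    # Toy 'navigate on autopilot' decision layer (hypothetical API).
    # The hard control problems live inside keep_lane(), follow_traffic()
    # and change_lane(); the route-following logic bolted on top is simple.
    EXIT_PREP_DISTANCE_M = 1000  # made-up margin

    def navigate_step(route, car):
        maneuver = route.next_maneuver()  # next exit/interchange from map data
        if car.distance_to(maneuver) < EXIT_PREP_DISTANCE_M:
            car.signal(maneuver.side)
            if car.lane_change_is_safe(maneuver.side):
                car.change_lane(maneuver.side)  # reuses lane-change primitive
        car.keep_lane()        # existing lane keeping (LKAS)
        car.follow_traffic()   # existing adaptive cruise (ACC)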


You still have to confirm, and it requires active human participation. It's not like it's really "Auto."


You actually don't have to confirm if the exit lane is split from your lane, which is pretty common. It will signal for you and get you into the appropriate lane.

If reaching your destination requires switching lanes into one that is not split from your own, you'll have to confirm it. But that may change in the upcoming weeks or months.


It will also change lanes for you if you signal, and wait for a safe opening if necessary.


This would have also allowed for more fanfare with a fitting name if/when they eventually release a fully autonomous system.


Hopefully one of the (European) consumer protection agencies will require them to reduce their misleading marketing soon.


Not that discussion again.

https://en.wikipedia.org/wiki/Autopilot


It doesn’t matter what the technical definition is in aviation. What matters is what expectations about Tesla’s technology it creates for most people.


One would argue the point of marketing is to change people's expectations.


Euroncap literally flagged it as an issue in their report. So when you say "not that discussion again" are you asking people not to discuss the actual topic of this thread?


Do you think that most people know how an airplane's autopilot works? What people know is that airplanes can fly and land themselves and human pilots are there as backup/safety.


Airplanes have autopilot and no one thinks that means there’s no pilot in front. Sure some drivers test the limit of their Tesla, just like some drivers test the limit of their Porsche or Ferrari.


Some scientists did "teleportation" experiments with photons. This does not mean that when people think about what teleportation means, they think about photons instead of teleporting people like in Star Trek.

Similarly, autopilot for cars does not make people think of a car that drives itself while the driver must also be 100% concentrated on the road. The name implies something different, and Tesla marketing knows that. The fans will now try to defend this bad choice with some bad examples from aviation; you need to go and ask people on the street what a car with an autopilot should be capable of.


I really don't want to live in a world where my moral character is judged by whether some random people on the street think of the right thing when they hear the name of my technical feature.

there is an inherent tradeoff between accessibility and precision in language. can we err a little bit on the side of precision in a technical field?


You can call it whatever technically-correct term you want in the code, the FCC documentation, technical conferences, etc.

When you're marketing it to the public, it's best to worry a little about how terms will be interpreted by laypeople, especially if said laypeople's lives are in the balance.


This is a silly semantic argument. Many people drive cars; very few fly aircraft. The meaning of a word in the second context has little to do with the first.


I don't understand why you're downvoted. The reason we leave the plane nomenclature to the pilots is that we know that pilots are trained professionals and understand what 'autopilot' implies and what its technical limits are.

Unless we expect the same rigor of people operating a vehicle, advertising matters. There aren't exactly a lot of pilots who will take their hands off the controls and read a comic because they mistake the capabilities of the plane.


Please elaborate. In what more common context does autopilot imply no standby human operator?


Standby, yes. Tesla's Autopilot appears to not be able to guarantee that sub-second reactions to random situations aren't necessary, at which point it's IMHO not "standby" anymore.

An airliner's autopilot can do this for phases of the flight, backed up by ATC, collision warning systems, height over ground and the relative emptiness of the sky.

In other situations, it needs close supervision too, but the first case probably leads the general public to associate the term with "standby".


In that case, the analogy to airplane autopilot is indeed inappropriate.

Airplane autopilot can have a standby human operator, with more than a second of transition time.


It does not mean it must exist for people to know what it means; if I say "teleportation transport" you know what it means without ever having seen one working. Why don't you ask some people what an autopilot-capable car should be able to do, then ask what a lane-assist-capable car should do, and see if the answers match.


I didn't say it had to exist. I asked "in what context".

If I understand you right, the answer is "in the theoretical context"?


Say, in a movies/books/pop culture context, autopilot means something that drives itself.


Aircraft autopilot:

1. Maintains a planned course without responding to obstacles.

2. Allows the pilots to safely take their hands off the controls and focus their attention elsewhere, such as on the radio.

Tesla autopilot is similar in property (1), but different in property (2) because aircraft are 'protected' by the Big Sky Theory.


Aircraft autopilot can automatically avoid collisions based on TCAS data, for recent airplanes (https://www.airbus.com/newsroom/press-releases/en/2009/08/ea...)


Sure. That is, however, an emergency system, and you are already in deep trouble if it triggers.


Autopilots have a very specific and determined role in aviation.

Pilots are also trained extensively on its capabilities and limits (and all other 'auto-' systems on airplanes).


Autopilot was a major factor in Air France's deadliest crash: https://en.wikipedia.org/wiki/Air_France_Flight_447


Airlines have also had to do special training courses to teach pilots not to get over-reliant on automation. Tesla's "autopilot" seems to fall in that category of being just enough automation that it gets people complacent, but not enough to actually let them safely be complacent.


> Airplanes have autopilot and no one thinks that means there’s no pilot in front.

If we're going to make that comparison, then we should also factor in the fact that you're not going to be allowed to use an aircraft autopilot without first being trained on its operation, limitations, etc.


I think a lot of the general public thinks an airplane’s autopilot causes the airplane to do a lot more than it actually does. The name is not helping.


That argument holds water the day Joe schmoe driver undergoes hundreds of hours of advanced training.


In airplanes the pilot is never watching a movie on the phone while the autopilot is engaged.


No one thinks the pilot has to be ready to take over at a moment's notice and be constantly paying careful attention to what is happening in front of the plane though.


Except for maybe pilots, anybody who has trained to be a pilot, or family or friends of pilots?

The nicer thing in the air is that, if autopilot fails, you're typically thousands of feet up.

Nobody's mentioning that autopilot in airplanes does nothing for collision avoidance. GCAS is collision avoidance and had its issues, but is considered well past autopilot.


>> The nicer thing in the air is that, if autopilot fails, you're typically thousands of feet up.

I'm not sure "nicer" is really the right word, here. I get what you're saying- there's time to react. But, that is time to react before you hit the ground. Not quite "nice".


Everyone who knows anything about aviation thinks the pilot has to be ready to take over at a moment's notice and be constantly paying careful attention to what is happening in front of the plane. Autopilots merely lower the pilot workload. Even with the autopilot engaged, the pilot is still actively flying the aircraft.


> Overall, the results were satisfying. In Tesla’s case, they were disastrous. And not because ADS don't work, but because apparently they work too well.

Misleading title much?


I don't know; they seem to be trashing Autopilot plenty in the article.


I disagree. The article concludes "Tesla's Autopilot is a great technology, perhaps the most advanced ADS on the market". What they are "trashing" seems to be the hyperbolic marketing of Tesla, which I feel at this point has been discussed ad nauseam and has gotten so much press that this just feels like a rehashing of "Tesla greatly oversold this!" that tons of people (myself included) have already commented on.


It is not only the marketing they criticize, but also the general handling:

"In the small obstacle ‘pot hole’ scenario, all the cars tested allowed the driver to cooperatively steer and manage the situation apart from the Tesla. The Tesla system does not allow the driver to deviate from the lane centering path and will disengage when a driver inputs steering torque."


Their most important point stems from the name: "Autopilot". That right there begins the host of issues that arose from how Tesla marketed their features in a way that oversold them. And I am genuinely impressed by what Tesla has accomplished with their system; it's just that its safe use cases are much, much narrower than those of a true "autopilot".

I wonder if there is even a safe "in between" phase between normal driving and truly, fully self-driving vehicles. Or will many people put too much trust in the tech to make it a safer alternative until it's much more mature. Think of airplane pilots, where "autopilot" is also not really a fully self-flying/landing/takeoff system: they have tons of specialized training to hammer home the limitations and work safely within the system. There's even a body of research literature on the topic of when too much automation makes pilots too complacent, and decades of engineering with that problem in mind.

It all just makes me think that Waymo may have it right in not targeting any type of transitional state between traditional and full automation.


To be fair, the Tesla Autopilot functions at a higher level than an aviation autopilot.

In aviation, an autopilot follows a fixed path and makes no attempt to avoid other aircraft or terrain.

There is no such thing as an automated takeoff. Automated landings, in practice, require the pilot to hand-fly the last part of the approach (except at a fleetingly small number of airports with cat IIIc approaches, where a ground navaid, not GPS, provides a high precision signal to guide the autopilot).

State of the art automation of aircraft speed consists of the pilot dialing a different speed to the autothrottle, which is effectively a traditional, non-adaptive cruise control.


> I wonder if there is even a safe "in between" phase between normal driving and truly, fully self-driving vehicles. Or will many people put too much trust in the tech to make it a safer alternative until it's much more mature.

the prevailing opinion on this forum seems to be that these semi-autonomous driver aids are inherently unsafe because the driver inevitably stops paying attention. but are they actually worse than a typical human driving manually? I know we've seen a bunch of high-profile news stories about tesla autopilot malfunctions and inattentive safety drivers, but is there any real data to suggest that these systems are as bad as portrayed?


This is going to be controversial perhaps, so I apologise in advance, but I'm continuously struck by the tendency of the tech industry (some sectors of it anyway) to overstate matters. Tesla's driver assistance system is an "autopilot"; Waymo's rides with human (safety) drivers are "autonomous"; DeepMind's AlphaZero that plays chess, Go and shogi [1] is a "general-purpose game playing agent"; Google translate that uses English as an intermediary language has developed an "interlingua".

I really don't get how this works. To me all this sounds completely ridiculous. Yet some of those claims appear in actual publications, in high-profile journals. Otherwise, people fall over each other to buy that hot AI stuff. I mean, what the hell?

Perhaps we must persist in educating people about the abilities, and limitations, of current technology?

______________

[1] And nothing else. Not even tic-tac-toe. It has a model of a board with squares and of pieces making chess-like moves, so it can't deal with anything else.


Tesla is going a different route to full automation than any other company. They seem to think the fastest way to full automation is a gradual progression from where they are now (low-level SAE 3) to level 4/5. Most others, and most importantly the leader Waymo (Google (Alphabet)), have decided that human society can't handle the risks of a level 3 system and won't deploy cars with those features. Waymo seems to have gotten to level 4 for the driving conditions around Phoenix, so they are now starting to deploy the system to the public.

At level 3 humans aren't driving the car but are supposed to pay attention so that they can take over if a problem arises. In general humans are not good at that task. Some of them will fail at it, the car will crash, people will get hurt/die, and maybe in this "safety first" world, automatic driving gets a bad name and is "set back" a decade.

Elon Musk has stated that he believes more in a "safety third" kind of philosophy. In an interview I saw he stated that with his kids he tells them that "don't panic" is a good first principle for dealing with life and "safety" would probably go third. It's important but not the prime directive of life. What is second is still up for consideration.

So the love/hate for Tesla's Autopilot strategy probably comes down to how you feel about risking people's lives to possibly save many more. Here is a question to ask yourself. If Tesla's autocar strategy causes 100 deaths of people who volunteer to take the risk before really good automation is developed, but Tesla's courage/recklessness has a 50% chance of saving a million (choose your own number) lives due to the speed-up of the implementation of self driving cars, would you support that? I would and so would many others, but many would not. This is a deep difference in values and that is why I think there is so much love/hate around the issue.
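To spell out the arithmetic in that hypothetical (these are the comment's made-up inputs, not data, and the whole framing is contestable):

    # Expected net lives saved under the hypothetical numbers above.
    certain_deaths = 100
    p_speedup_saves_lives = 0.5
    lives_saved_if_it_works = 1_000_000

    expected_net = p_speedup_saves_lives * lives_saved_if_it_works - certain_deaths
    print(expected_net)  # 499900.0

The arithmetic comes out overwhelmingly positive; the disagreement is over whether that kind of expected-value trade is acceptable at all, which is exactly the values difference described above.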

Maybe Tesla wants to call it Autopilot because they hope they can incrementally improve it so that someday they get to level 4/5 and won't have the need to change the name at some arbitrary place along the way to full autopilot competency. Cruise control was a bit of a misnomer also but people seem to be fine with the word at this point.


Autopilot is level 2 autonomy.

No one has level 3 autonomy yet.

https://en.wikipedia.org/wiki/Automated_driving_system#Level...

Level 2 – Partial automation The driver must be able to control the vehicle if corrections are needed, but the driver is no longer in control of the speed and steering of the vehicle.[6] Parking assistance is an example of a system that falls into this category[7] along with Tesla's autopilot feature.[8] A system that falls into this category is the DISTRONIC PLUS system created by Mercedes-Benz.[9] It is important to note the driver must not be distracted in Level 0 to Level 2 modes.
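For reference, here's the full J3016 ladder sketched as a Python enum; the level names are my own paraphrase of the standard, not anything from the article:

    from enum import IntEnum

    class SAELevel(IntEnum):
        """SAE J3016 driving automation levels (paraphrased)."""
        NO_AUTOMATION = 0           # human driver does everything
        DRIVER_ASSISTANCE = 1       # system assists steering OR speed
        PARTIAL_AUTOMATION = 2      # steering AND speed; human must supervise
        CONDITIONAL_AUTOMATION = 3  # system drives; human takes over on request
        HIGH_AUTOMATION = 4         # no human fallback needed, limited domain
        FULL_AUTOMATION = 5         # no human fallback needed, anywhere

By this ladder, the "driver must not be distracted" requirement is exactly what keeps Autopilot at level 2.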


Reading over the SAE (J3016) Automation Levels here[1], it seems to me that Tesla Autopilot is now into level 3, especially for highway driving. If you can go exit to exit without any direction from a human, I would definitely say level 3 has been reached. Fine details of the definition between one level and another should probably only be interesting to lawyers and their clients.

Waymo not level 3? Seems they claim to be level 4 for their Phoenix rollout.

[1]https://en.wikipedia.org/wiki/Self-driving_car


Yes, they do. That's their marketing schtick. That their cars are already fully autonomous.

However, that's just preposterous. It's a bit as if Boeing claimed their planes have anti-gravity because they're just better planes (I know next to nothing about planes; Boeing's name chosen completely at random). Waymo are not particularly farther than anyone else on the path to full autonomy; they're just doing the smart thing of not trying to drive where their system can't, rather than leaving it to the public to test its limits.


More like Boeing claimed to have anti-gravity that only worked in Seattle. Very exciting, really anti-gravity, but not yet useful for full scale implementation on planes that need to fly everywhere.


your wikipedia link 1) puts Tesla on level 2 based on an autopilot update from 2015 (check reference [8]). It's really outdated. 2) puts Google on level 4.


Yes, apologies for that. The wikipedia article is not very well researched. Another article (on fatalities) states that the Uber car that killed Elaine Herzberg was level 4, which is patently absurd.

The following article, in The Conversation, has a better overview:

https://theconversation.com/what-are-these-levels-of-autonom...

And here's the SAE document the five levels are based on (actually from the wikipedia article's sources):

https://web.archive.org/web/20161120142825/http://www.sae.or...

It's easy to end up splitting hairs though because level 2 and 3 are not very clearly defined, and anyway, these are very vague definitions without a hard test to check them.

The vagueness allows tech companies to claim level 3 automation, but in that they're trying to push the boundaries of marketing- not of the technology.


The difference is, Tesla is a car company and Alphabet isn't. Tesla is hoping autopilot can help them ship units, and Alphabet is hoping they can make the tech first and either build a car around it, or, more likely, license it to people who know how to make cars. I'm not sure either is altruistic.


TL;DR

Tesla ADS is the best on the market but it is marketed as an autopilot, which is wrong and caused some overconfident owners to die.


Your comment makes it sound like the two mistakes are equivalent.

They might have been overconfident, but if the misleading marketing leads to deaths, I don't think the finger should be pointed to "overconfidence".


>Your comment makes it sound like the two mistakes are equivalent.

It's a TL;DR. Of course it sacrifices nuance for brevity.

>They might have been overconfident, but if the misleading marketing leads to deaths, I don't think the finger should be pointed to "overconfidence".

I generally agree with you but there is a good argument to be had that the driver is responsible. These were personally owned vehicles, not rentals. It's not like the owners who died didn't have time to become familiar enough with the system to reach the same performance conclusions (i.e. "it's basically just really good lane keeping and adaptive cruise") as Euro NCAP did. I get that products for the general consumer are supposed to be stupid proof but everyone knows that marketing bends the truth by only ever showing the ideal case. Nobody expects fancy AWD and electronic nannies to let them navigate snow covered hairpin roads with 100% accuracy every time like in the car commercials. Nobody expects to rely on automatic braking or cross traffic detection to not get in accidents when backing out of a driveway. The same goes for Tesla's autopilot. It is unreasonable to expect Autopilot to perform the way the marketing implies in 100% of circumstances.


>It's not like the owners who died didn't have time to become familiar enough with the system to reach the same performance conclusions

This means that if a Tesla kills me in the first two weeks, I am not responsible, since per your argument I did not have enough time to realize the marketing deception and the actual capabilities of the car. It is not like I was trained on this specific car model the way real pilots are.


Article from October.


Man, those six weeks completely changed the self-driving landscape, what with the... um... nah it's about the same but Waymo is now charging for rides.


For those people with shares it matters. New information changes share price. Old information tends to be reflected already.


the title is extremely inaccurate


[meta] Whenever there is something remotely critical of Tesla, the comment section is a mess. Downvotes left and right, people divided over bullshit semantics, etc. I really don't get why. Same thing (but worse, actually) happens on Reddit.

Is Tesla PR so 'well connected' that they astroturf basically everywhere, or did their marketing succeed in creating an army of people who emotionally associate Tesla with the future of humanity? It's just a car folks, take a chill pill.


I don't think Tesla needs to astroturf, they have plenty of supporters who will happily defend them for free.

Similar things happen in Apple threads, sports threads (elsewhere), and the games console wars. People just don't like their chosen sacred cow getting criticized.


You didn't specify which semantics are "bullshit", but I wouldn't lump discussion about autopilot into that.

For example, part of the argument over the feature being termed autopilot has been that it supports misleading statements from salespeople—the least "semantic" issue with the name I am aware of.

A feature designed to assist in life-or-death situations shouldn't be named in a Darwinian way, harming those who assume more than should be assumed.


What a clickbait headline. The trashing is... wait for it... because of the name Autopilot and because it is too good. Saved you a click.



