It then goes on to enumerate a host of problems with Tesla's Autopilot, including a lack of geofencing, poor monitoring of human attention, "late or no response" to cut-in and cut-out scenarios, AutoSteer's resistance to human control, the system's trouble with some curves, and, of course, Tesla's inconsistent and confusing messaging about what exactly Autopilot is meant to be.
Far from being a "trashing," it sounds like the Euro NCAP did exactly what it was supposed to do, which is to advocate for safety.
The other complaints are largely remedied by being a safe driver - the car tells you repeatedly what Autopilot's limitations are when you go to use it.
If you're talking about the recent case where police followed the car for 7 minutes and brought it to a controlled stop then this is pretty much a resounding yes! If I fell asleep at the wheel of my car I'd more than likely be dead within seconds.
Same thing could happen with cruise control and be even more dangerous.
IMO, systems that work for 99% of cases but still require 100% driver attention are dangerous, and they should include as much software and hardware as needed to make sure the human is paying attention and not sleeping or watching movies.
I think it'd be hard to sleep with your hands on the wheel and cause enough resistance to keep the car from complaining. Though I suppose it's possible - I thought the driver was drunk in that case (could be mistaken)?
There are reasonable trade offs to make.
Obviously the projectile argument also applies to a motorcycle, but for the most part only the rider is exposed to the extra risk of riding one.
Test method and results with links to every model https://www.euroncap.com/en/vehicle-safety/safety-campaigns/...
Tesla (.pdf) https://cdn.euroncap.com/media/41590/euro-ncap-automated-dri...
And I'm really glad it isn't crippled with some geofencing that would limit it to where I could enable it.
Having said all that, the name is terrible and implies something the system isn't capable of yet. It's a dangerously bad marketing choice.
The problem I faced was glare from oncoming cars and trucks, some not dimming their lights. So I set AP to the task of keeping me in the lane, and when someone came at me too bright to see, I could focus on the white lane line and the AP screen at the same time, assured I was in my lane.
Is it perfect? Oh hell no. It will do the Tail of the Dragon in North Georgia provided you keep it at the speed limit, but it can be a bit sketchy, as it hugs the center line a bit on two-lane roads. I do know it's not meant for that, but when safe I had to try.
For general use I mostly rely on traffic-aware cruise control to keep a safe distance and speed while I do the steering, only letting it steer when I'm fumbling for something in the car.
Edit: on the TM3 it can be cancelled by tapping the brakes or tapping the right stalk up. It is so easy to cancel that I do not understand NCAP's issue there. Did they not bother to read how to cancel it?
We already put numerous requirements on people before they are allowed to legally drive. Why are we not blaming drivers for knowing nothing about their cars beyond the simple name of a feature? You literally can't turn the feature on in a Tesla without it warning you about its limitations. Doesn't the driver have some responsibility to pay attention to those warnings or at least spend a few minutes learning about this feature before they trust it with their lives (and the lives of others)?
Just requires the airport to have ILS, and a fairly modern autopilot.
It is literally in the name.
In physics, "acceleration" is used when things slow down too, but at least in my language that sounds weird, and in day-to-day use acceleration means an increase in speed. If you want to understand how/why most people define autopilot as more than driver assist, then just ask them, and stop telling them they are wrong because in domain X the word is used differently.
As in, it pilots itself. Which would indeed imply that no driver was needed.
The reviews and videos may not contain all the details you'd get from the technical manual, which you don't have until you get the car.
Then again, I'm one of those weird people who actually takes the recommended safety precautions when I do dangerous things. I also do crazy stuff like checking my tire pressure.
Additionally, the ILS frequency can be set automatically via electronic charts, particularly if the flight plan contains the destination runway (which isn't always possible).
No. They can’t. Not unless you have a IIIc approach and autopilot. Those things are not common.
If you don’t have both of those things, the pilot must, at minimum, fly the flare (for a IIIb approach, which are somewhat common at large airports).
Even if you have a IIIc approach, the pilot still monitors systems, deploys high-lift devices and landing gear, and makes the go/no-go decision. There is no reasonable interpretation of "planes can absolutely land themselves" which is in any way true.
So you're saying they can?
Very few airports have even one cat IIIc approach. Not all modern airliners have an autopilot capable of a IIIc approach (which requires triple redundancy, IIRC).
And you accuse others of arguing in bad faith...
Frankly - if you're asking for emergency "landings", I'd say modern driver assists are probably up to the task. Most have a pretty decent chance of bringing a vehicle to a safe stop. And if it's an emergency and the primary driver cannot stop the car, then the systems' still-inevitable unreliability isn't much of an issue - it's still much, much better than nothing.
Copilot isn't a bad name change. Not that they'll ever do it.
You set your destination on the GPS and it will take you exit-to-exit on highways.
This is not lane assist or distance control you see on other cars.
If reaching your destination requires switching lanes into one that is not split from your own, you'll have to confirm it. But that may change in the upcoming weeks or months.
Similarly, "autopilot" for cars does not make people think of a car that drives itself while the driver must also be 100% focused on the road. The name implies something different, and Tesla marketing knows that. The fans will now try to defend this bad choice with some bad examples from aviation; you need to go ask people on the street what a car with an autopilot should be capable of.
There is an inherent tradeoff between accessibility and precision in language. Can we err a little bit on the side of precision in a technical field?
When you're marketing it to the public, it's best to worry a little about how terms will be interpreted by laypeople, especially if said laypeople's lives are in the balance.
Unless we expect the same rigor of people operating a vehicle, advertising matters. There aren't exactly a lot of pilots who will take their hands off the controls and read a comic because they mistook the capabilities of the plane.
An airliner's autopilot can do this for phases of the flight, backed up by ATC, collision warning systems, height over ground and the relative emptiness of the sky.
In other situations, it needs close supervision too, but the first case probably leads the general public to associate the term with "standby".
An airplane autopilot can have a standby human operator, with more than a second of transition time.
If I understand you right, the answer is "in the theoretical context"?
1. Maintains a planned course without responding to obstacles.
2. Allows the pilots to safely take their hands off the controls and focus their attention elsewhere, such as on the radio.
Tesla autopilot is similar in property (1), but different in property (2) because aircraft are 'protected' by the Big Sky Theory.
Pilots are also trained extensively on its capabilities and limits (and all other 'auto-' systems on airplanes).
If we're going to make that comparison, then we should also factor in the fact that you're not going to be allowed to use an aircraft autopilot without first being trained on its operation, limitations, etc.
The nicer thing in the air is that, if autopilot fails, you're typically thousands of feet up.
Nobody's mentioning that autopilot in airplanes does nothing for collision avoidance. GCAS is collision avoidance and had its issues, but is considered well past autopilot.
I'm not sure "nicer" is really the right word, here. I get what you're saying- there's time to react. But, that is time to react before you hit the ground. Not quite "nice".
Misleading title much?
"In the small obstacle ‘pot hole’ scenario, all the cars tested allowed the driver to cooperatively steer and manage the situation apart from the Tesla. The Tesla system does not allow the driver to deviate from the lane centering path and will disengage when a driver inputs steering torque."
I wonder if there is even a safe "in between" phase between normal driving and truly, fully self-driving vehicles. Or will many people put too much trust in the tech to make it a safer alternative until it's much more mature. Think of airplane pilots, where "autopilot" is also not really a fully self-flying/landing/takeoff system: they have tons of specialized training to hammer home the limitations and work safely within the system. There's even a body of research literature on the topic of when too much automation makes pilots too complacent, and decades of engineering with that problem in mind.
It all just makes me think that Waymo may have it right in not targeting any type of transitional state between traditional and full automation.
In aviation, an autopilot follows a fixed path and makes no attempt to avoid other aircraft or terrain.
There is no such thing as an automated takeoff. Automated landings, in practice, require the pilot to hand-fly the last part of the approach (except at a vanishingly small number of airports with cat IIIc approaches, where a ground navaid, not GPS, provides a high-precision signal to guide the autopilot).
State of the art automation of aircraft speed consists of the pilot dialing a different speed to the autothrottle, which is effectively a traditional, non-adaptive cruise control.
The prevailing opinion on this forum seems to be that these semi-autonomous driver aids are inherently unsafe because the driver inevitably stops paying attention. But are they actually worse than a typical human driving manually? I know we've seen a bunch of high-profile news stories about Tesla Autopilot malfunctions and inattentive safety drivers, but is there any real data to suggest that these systems are as bad as portrayed?
I really don't get how this works. To me, all this sounds completely ridiculous. Yet some of those claims appear in actual publications, in high-profile journals. Meanwhile, people fall over each other to buy that hot AI stuff. I mean, what the hell?
Perhaps we must persist in educating people about the abilities, and limitations, of current technology?
And nothing else. Not even tic-tac-toe. It has a model of a board with squares and of pieces making chess-like moves, so it can't deal with anything else.
At level 3, humans aren't driving the car but are supposed to pay attention so that they can take over if a problem arises. In general, humans are not good at that task. Some of them will fail at it, the car will crash, people will get hurt or die, and maybe in this "safety first" world, automatic driving gets a bad name and is "set back" a decade.
Elon Musk has stated that he believes more in a "safety third" kind of philosophy. In an interview I saw he stated that with his kids he tells them that "don't panic" is a good first principle for dealing with life and "safety" would probably go third. It's important but not the prime directive of life. What is second is still up for consideration.
So the love/hate for Tesla's Autopilot strategy probably comes down to how you feel about risking people's lives to possibly save many more. Here is a question to ask yourself. If Tesla's autocar strategy causes 100 deaths of people who volunteer to take the risk before really good automation is developed, but Tesla's courage/recklessness has a 50% chance of saving a million (choose your own number) lives due to the speed-up of the implementation of self driving cars, would you support that? I would and so would many others, but many would not. This is a deep difference in values and that is why I think there is so much love/hate around the issue.
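To make that trade concrete, here's a minimal expected-value sketch in Python (all numbers are the hypothetical ones from above, not real data):

    # Hypothetical figures from the comment above, not real data.
    deaths_during_rollout = 100          # volunteers lost before the automation matures
    lives_saved_if_it_works = 1_000_000  # "choose your own number"
    p_speedup = 0.5                      # chance the strategy actually accelerates things

    expected_net_lives_saved = p_speedup * lives_saved_if_it_works - deaths_during_rollout
    print(expected_net_lives_saved)  # 499900.0

The arithmetic comes out hugely positive, which is why some people accept the trade; the disagreement is over whether a positive expected value justifies imposing the risk at all, and that's a question of values, not math.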
Maybe Tesla wants to call it Autopilot because they hope they can incrementally improve it so that someday they get to level 4/5 and won't have the need to change the name at some arbitrary place along the way to full autopilot competency. Cruise control was a bit of a misnomer also but people seem to be fine with the word at this point.
No one has level 3 autonomy yet.
Level 2 – Partial automation
The driver must be able to take control of the vehicle if corrections are needed, but is no longer actively controlling the speed and steering. Parking assistance falls into this category, along with Tesla's Autopilot feature and Mercedes-Benz's DISTRONIC PLUS system. It is important to note the driver must not be distracted in Level 0 to Level 2 modes.
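Since the levels keep coming up in this thread, here's a quick sketch of the whole SAE J3016 ladder as a Python mapping; the one-line glosses are my own paraphrases, not SAE's wording:

    # SAE J3016 driving-automation levels (glosses are my own paraphrases).
    SAE_LEVELS = {
        0: "No automation: the human does everything",
        1: "Driver assistance: steering OR speed is assisted; the human drives",
        2: "Partial automation: steering AND speed are assisted; the human supervises constantly",
        3: "Conditional automation: the system drives; the human must take over on request",
        4: "High automation: the system drives within a limited domain; no takeover needed there",
        5: "Full automation: the system drives anywhere a human could",
    }

    for level, gloss in sorted(SAE_LEVELS.items()):
        print(f"Level {level}: {gloss}")

Note how the human's role flips between levels 2 and 3; that boundary is exactly what the rest of this thread is arguing about.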
Waymo not level 3? Seems they claim to be level 4 for their Phoenix rollout.
However, that's just preposterous. It's a bit as if Boeing claimed their planes have anti-gravity because they're just better planes (I know next to nothing about planes - Boeing's name chosen completely at random). Waymo isn't particularly further along the path to full autonomy than anyone else - they're just doing the smart thing and not trying to drive where their system can't, instead of leaving it to the public to test its limits.
The following article, in The Conversation, has a better overview:
And here's the SAE document the five levels are based on (actually from the wikipedia article's sources):
It's easy to end up splitting hairs, though, because levels 2 and 3 are not very clearly defined, and in any case these are vague definitions without a hard test to check them.
The vagueness allows tech companies to claim level 3 automation, but in doing so they're pushing the boundaries of marketing, not of the technology.
Tesla's ADS is the best on the market, but it is marketed as an autopilot, which is wrong and has caused some overconfident owners to die.
They might have been overconfident, but if the misleading marketing leads to deaths, I don't think the finger should be pointed to "overconfidence".
It's a TL;DR. Of course it sacrifices nuance for brevity.
>They might have been overconfident, but if the misleading marketing leads to deaths, I don't think the finger should be pointed to "overconfidence".
I generally agree with you, but there is a good argument to be had that the driver is responsible. These were personally owned vehicles, not rentals. It's not like the owners who died didn't have time to become familiar enough with the system to reach the same performance conclusions (i.e. "it's basically just really good lane keeping and adaptive cruise") as Euro NCAP did. I get that products for the general consumer are supposed to be stupid-proof, but everyone knows that marketing bends the truth by only ever showing the ideal case. Nobody expects fancy AWD and electronic nannies to let them navigate snow-covered hairpin roads with 100% accuracy every time like in the car commercials. Nobody expects to rely on automatic braking or cross-traffic detection to not get in accidents when backing out of a driveway. The same goes for Tesla's Autopilot. It is unreasonable to expect Autopilot to perform the way the marketing implies in 100% of circumstances.
This means that if a Tesla kills me in the first two weeks, I am not responsible, since per your argument I did not have enough time to realize the marketing deception and the actual capabilities of the car. It's not like I was trained, the way real pilots are, on this specific car model.
Is Tesla PR so 'well connected' that it can astroturf basically everywhere, or did their marketing succeed in creating an army of people who emotionally associate Tesla with the future of humanity? It's just a car, folks; take a chill pill.
Similar things happen in Apple threads, sports threads (elsewhere), and the games console wars. People just don't like their chosen sacred cow getting criticized.
For example, part of the argument over the feature being termed autopilot has been that it supports misleading statements from salespeople—the least "semantic" issue with the name I am aware of.
A feature designed to assist in life-or-death situations shouldn't be named in a Darwinian way, harming those who assume more than should be assumed.