While planes have become dramatically safer because of all this automation, pilots' uncertainty about what the autopilot is doing is a major concern nowadays, and has been a factor in several accidents.
There are very interesting HCI challenges around properly communicating to the pilot/driver "what the heck it is doing" and clearly communicating just how much control the human has or doesn't have at any given point.
This "announcement" certainly doesn't inspire any confidence that they have really thought this through deeply enough (I think they probably have, but it should be communicated like it). As a huge Tesla fan, I can't help but feel like I need to hold my breath now and make sure something terrible doesn't happen because of this, and it ends up leading to even more regs setting us back on the path to full car automation.
It sounds like it's time to introduce the dog?
Example: Air France 447, where (presumably) the airspeed sensors were blocked by ice, which led the autopilot to disengage. There is also the theory that the pilots then made some of their mistakes based on the belief that the avionics would stop them from putting the plane into an unsafe state, and didn't realize that the system wasn't able to do so with missing information. (I hope I'm presenting this correctly, but that's what I remember reading afterwards.)
There is also the training aspect: if a system takes care of something 99.9% of the time, the pilot is less experienced in the 0.1% where it doesn't. There is a reason the safest airline pilots often fly much smaller aircraft as a hobby and build some instinct for manual flying there.
I'm not an airline pilot, but I have the impression almost everyone grossly underestimates how often a human pilot is still required to safely fly a commercial airliner. Yes, the autopilot may be able to handle 99% of flights without intervention, or even 99.9%, but simple math shows that still leaves tens, if not hundreds, of thousands of flights every year where the pilot needs to take action, and that any individual pilot will run into a situation the autopilot can't handle probably once or twice a year. Likely a lot more often.
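To make the "simple math" concrete, here's a back-of-the-envelope calculation; the ~100,000 flights/day figure is my own round-number assumption, and the intervention rates are just the hypothetical ones from above:

```python
# Back-of-the-envelope only; both numbers below are assumptions for illustration.
flights_per_day = 100_000                  # assumed worldwide commercial flights per day
for intervention_rate in (0.01, 0.001):    # the hypothetical 1% and 0.1% rates above
    per_day = flights_per_day * intervention_rate
    per_year = per_day * 365
    print(f"{intervention_rate:.1%}: ~{per_day:,.0f}/day, ~{per_year:,.0f}/year")
```

Under those assumptions that's roughly 100 to 1,000 flights per day, or tens to hundreds of thousands per year, where a pilot has to step in.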
Large parts of the world have weather conditions, diversions, or airports with spotty or unreliable ground systems, by the way; in those areas the need for a human pilot will be several times higher than once every hundred flights. There are ample examples of plane crashes that were caused by automated systems rather than pilot error (I don't remember the exact location and date, but I know that, e.g., not very long ago a plane crashed because the ILS beacon at the airport was malfunctioning). In other cases the level of automation enabled poorly trained pilots to fly the airplane (case in point: the Asiana crash at SFO), which IMO is not 'pilot error' because these people should not have been flying the plane in the first place.
All this makes me think the solution to improve airliner safety even further is not more automation, but better pilot training.
As for autonomous driving, beyond AI-assisted highway cruising I don't believe in the concept at all, and my bet is that in 25 years we'll have realized that we wasted most of that time trying to build fully autonomous vehicles. Limiting the possibilities of driver error seems like a much better investment (e.g. adaptive cruise control, automatic emergency braking, etc.).
There's a lot to be said for that. The pilots didn't know their altitude and airspeed either. They thought they were overspeeding when they were stalling.
Some military aircraft, mostly the stealth ones with terrible aerodynamic stability, need that sensor data to stay in the air at all. If they lose the sensor data or the stability control system, the only option is to eject. So they tend to go in for more sensor redundancy. There's certainly no reason that large transport aircraft can't have more sources of basic attitude/altitude/airspeed info.
They needed to descend to gain airspeed and lift. But they were pulling up and literally falling out of the sky without realizing it.
That's what you're doing here. Saying "the copilots crashed it" teaches us nothing; we need to know why they crashed it, what cues they misunderstood and what skills they lacked, so we can keep it from happening again.
I think the design language will evolve slowly with the users at a speed roughly related to the adoption curve.
It may tell users everything that the car is deciding to do now, and as confidence in the system increases with adoption, it will do less and less.
That is why more advanced self-driving-car researchers are working on the harder problems of getting abstract, intelligent interaction with a car working. The real market problem is telling your car "take me to the store", rather than just getting it to drive down a straight lane.
If the bad guys had only hacked Bond in this scene, it could have been much different!
If the percentage of accidents caused by pilots interfering with a working system is greater than the percentage caused by a malfunctioning system that the pilot failed to override, people will still be against taking the ability to interfere away from pilots. Even when it causes more accidents...
There's something about humans trusting humans more than machines that I don't fully understand. Systems can make mistakes, but the number of human mistakes is often so much greater that it's absurd humans are trusted at all, and yet people will side with the human over the machine.
Humans will always want human oversight - even when that oversight does more harm than good once automation reaches a certain threshold...
Special note: I'm not familiar with avionics or the data on pilot interference with the system vs. failure of the pilot to intervene. So maybe this example doesn't hold very well for avionics...
Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over, with a car following it too closely... you have a better understanding that there's likely to be an accident ahead (I've seen this happen several times). A computer most likely wouldn't predict that.
Unless ALL cars are automated, it is a must that a driver be able to take over quickly.
> A human can make judgement calls in unexpected situations.
A properly programmed machine can behave smarter and faster, and it also knows its limitations, so it can account for them. "Judgement calls" aren't needed because the machine always keeps good judgement.
> Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).
Very much depending on programming and sensors, but from some point on, I'll always be betting on programming and sensors.
> Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over, with a car following it too closely... you have a better understanding that there's likely to be an accident ahead (I've seen this happen several times). A computer most likely wouldn't predict that.
The computer will track all vehicles, their velocities, and their past observed movement history, perceive road conditions that you can't due to the limitations of Mark I Eyeballs, and push all that data through an inference system capable of staying rational all the time, without being affected by "stress" or "surprise". Solutions will be computed literally faster than the blink of an eye. The machine will see the phase space of the road-cars system and be able to navigate it to safety.
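As a toy illustration of the kind of inference meant here (not any real system's algorithm): under a simple constant-velocity assumption, a tracker can already project when and how closely two tracked vehicles will pass each other and flag a likely conflict. All positions, speeds, and names below are made up:

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time (s) and distance (m) of closest approach for two constant-velocity tracks.
    p*, v* are 2D positions (m) and velocities (m/s)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    speed2 = float(np.dot(dv, dv))
    t = 0.0 if speed2 == 0 else max(0.0, -float(np.dot(dp, dv)) / speed2)
    return t, float(np.linalg.norm(dp + t * dv))

# Overtaking car doing ~5 over (27 m/s) closing on slower traffic 40 m ahead (22 m/s):
t, d = closest_approach([0.0, 3.5], [27.0, 0.0], [40.0, 3.5], [22.0, 0.0])
print(f"closest approach in {t:.1f}s at {d:.1f} m")  # small d => flag a likely conflict
```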
> Unless ALL cars are automated, it is a must that a driver be able to take over quickly.
The driver needs to never take over. A dog should be placed, trained to bite the driver if he tries to touch the controls. Better yet, remove the controls.
The machine knows its limitations. The right way to avoid an accident is not to perform stunts based on intuition, it's to keep the entire system (of cars and the road) in a stable and known state, and navigate that state to safety.
People seem to get machines really wrong. Machines today are limited in their creativity. But they are orders of magnitude better than humans in getting things right. On the road, we need more of the latter than the former.
You are correct that in the vast majority of situations a computer will outperform a human. I've avoided accidents by luck more than judgement, guessing that the car will be able to do what I'm asking (whilst the autopilot would know at all times and react in milliseconds).
All the world needs to accept is that at some point someone will die because of an autopilot error, and that's OK because the net number of deaths is lower than with the same number of humans driving.
That's where I see the real sticking point for automation. Driving isn't about getting from A to B, nor is safety the top priority. If it were, there wouldn't be such a thing as a V-8 (or whatever the electric equivalent is). I find it very ironic that a performance car like the Tesla might promote "the path to full car automation". You think the gun debate is tricky? Try telling people they cannot do 51 in a 50 anymore.
Yeah, this is a huge problem with drivers today. 99% of driving is absolutely about getting from A to B in as safe way as possible; anything else is reckless endangering of the lives of others. But people grow up dreaming about fast cars and freedom and adrenaline, then they get their licenses and can't confront the boring reality. That's the very reason we need self-driving cars on public roads. Driving for fun needs to be separated out and put somewhere else.
How about sightseeing buses? Or bicyclists riding purely to stay in shape? Or people learning how to drive? Or cops patrolling neighborhoods? Or ice cream vans? There are lots of perfectly reasonable uses of roads that have little to do with A-to-B. The elimination of "fun" in the name of safety is a very uphill battle.
> So my friends and I riding motocycles through the mountains on a sunny afternoon are reckless endangerment?
If you're breaking the law then yes, you're reckless, period. If not, then enjoy it all you want.
> How about sightseeing buses?
They're driving A-B-C-D in a sightseeing-optimal way within the limits of law and safety.
> Or bicyclists riding purely to stay in shape?
They aren't doing it next to cars, are they? There are dedicated areas for riding bikes for sports.
> Or cops patrolling neighborhoods?
A-to-B driving, with lots of As and Bs. Also, there are two of them, one is doing the driving, the other is doing the patrolling.
> Or ice cream vans?
A-to-B, with lots of As and Bs.
Public roads are not the place for having fun. You can enjoy driving on them however much you want as long as it doesn't affect the safety of others.
I myself spend more hours on public roads on a bicycle than driving a car.
But pure sports? There are parks for that.
Don't get me wrong, I'm all for reclaiming cities for pedestrian and bike travel and creating a maximally energy-efficient public transport system (ironically, the best idea would probably be self-driving, publicly owned electric cars forming a PRT network). But in the current situation one has to stay pragmatic.
We should be promoting cycling, not making it even more difficult. Self-driving cars can make the roads significantly safer for cyclists and everyone else. The most dangerous thing on the roads right now is human-controlled cars.
I love to go on long bike rides, like 4 or 10 hours long. Parks are not an option for that, while backroads are awesome. Once out of town, I usually only meet a car once in a while. And while in town and suburbia, I guess I qualify as an A-to-B commuter. Even out of town, sometimes it's sort of an A-to-B commute, because I want to get to some sightseeing spot or fancy cafe in the woods or something. I just happen to cycle instead of drive. Or sometimes I dispatch my family in a car and go by bicycle myself. It's still A-to-B, isn't it?
Saying what is and what is not "proper road use" is a very slippery slope. Is it OK to drive to get to a grocery store? Is it OK to drive to some spot in the middle of nowhere for sightseeing? Is it OK to drive for a few hours to BBQ in a place you've never seen? Is it OK to drive to go fishing in a nicer lake instead of the one nearby? Is it OK to drive to see friends if you can video call them instead?
Generally such laws include things like:
1) May use the turn lanes when turning.
2) Must ride to a specific side of the road unless: You can keep with the speed of traffic, there is not enough safe distance for a car to pass you, or it is not safe to ride on the side of the road. (These rules are generally true of any vehicle that drives slower than general traffic.)
3) Protections to the bicyclist. Either in the form of "vulnerable traffic" laws, or specific bike protection laws.
4) There are even places with laws preventing bicyclists from using the sidewalk.
5) About the only place a bicyclist isn't allowed to ride is in high-speed traffic areas like a freeway/highway/etc.
6) Protections for the bicyclist against negligent drivers such as mandating safe driving distances, when to pass, and protecting against specific activities that may endanger the bicyclist.
I do agree that a bike path is certainly much safer (and perhaps more scenic) for fitness, but many people are interested in traveling to a specific location. (Ex, the beach or a park. Riding along the coastline. Etc.)
The main problem with biking on the road isn't that you are doing something wrong... but that drivers do not respect you. Tailing too close, not giving you the right of way when you have it, etc.
And I wonder why you assume that a bunch of motorcycles riding through the mountains would be breaking any laws. I know a group of lesbian Harley riders who never go faster than 50 who would take issue with any assumption that riding a bike suggests illegality. For them it is about being seen and showing pride. I did a ride for burn victims a few years back. The kid riding on my bike was all smiles even though I doubt we broke 30 the entire way. He just liked the wind, a bit like my dog when she hangs her head out the window. Riding for fun does not mean riding for speed.
What I mean is - going on a motorcycle trip on a scenic route? Sure, it's fine (a self-driving motorbike would probably be even safer though :)). The problem really starts when some people put fun in front of safety and e.g. start speeding.
TL;DR: have as much fun as you want, but not at the expense of safety. Want to have additional fun coming from doing dangerous things? Go do it somewhere where you endanger only yourself (and those who consciously decided to participate in such activities). Racetrack, private roads, whatever.
Any sudden, unexpected movement by an autodrive motorcycle would probably see the rider thrown free, or at least result in a weight shift great enough to bring down the entire package.
Because all those "ballet-like" things are basic feedback-control problems, the kind you learn about in Control Theory 101 in college. You can pretty much convert the problem of steering a self-driving motorcycle into the problem of a self-driving car by adding a module that accepts car-like inputs and translates them into dynamic balance control. We've solved the basics with Segways, which are smart-high-schooler-level electronics projects.
Also, generally, whatever you can do on your vehicle a machine can do the same using the same control inputs, only better. Manual gear shift included.
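To put the "Control Theory 101" claim in concrete terms, here's the textbook building block involved: a PID loop driving a lean-angle error to zero. This is a generic sketch with made-up gains and names, not anything from a real motorcycle or Segway controller:

```python
class PID:
    """Textbook PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical balance loop: command a steering torque to hold a target lean angle.
balance = PID(kp=40.0, ki=2.0, kd=8.0)            # made-up gains
def control_step(target_lean_rad, measured_lean_rad, dt=0.01):
    return balance.update(target_lean_rad - measured_lean_rad, dt)  # torque command
```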
This all comes to a head at the transition point of countersteering, where the controls reverse around 20ish mph. The rider's balance controls whether turning the handlebars right means right or left. If the rider isn't in sync with the autodrive, everyone is in the ditch.
(That experience taught me to always tell people to lean into the curve if they've never sat on a motorcycle before)
Given the close interaction between the rider and the bike, I'm not sure how comfortable such a vehicle would be. It may turn out to be nausea-inducing.
The speed limit (and various other laws) is very much all or nothing. Riddle me this: which of the following is safer to do on the same road with a speed limit of 50?
a) Driving a brand new Porsche with brand new tyres on a sunny day, doing 55
b) Driving a car from the 70s with tyres that are almost worn out (but still in the legal range), at night, in heavy rain, doing 48
Now consider that a) is illegal and b) is legal.
Breaking a law should not be a binary decision; making it a 0/1 choice makes the world worse for everyone, because to be completely safe the restriction has to account for the worst case, which means the vast majority will be artificially limited and feel like the law is needlessly overreaching. This is not a good situation to be in.
That doesn't mesh with areas where robots have already taken over. The robots moving freight around shipping terminals don't move any faster than humans driving trucks. The robots I see delivering drugs in hospitals don't move faster than the nurses.
Speed limits aren't just about the cars. Noise and pedestrian safety are big factors. There is no technology yet that can allow a robot to see around blind corners. They may have a quarter-second faster reaction time, but that doesn't mean they could be trusted to move much faster towards a crosswalk that might see a bike cross at any second.
I guess it boils down to perspective. The hospital robot is in no hurry. His life is no better or worse if he is faster or slower. But a human inside a robot-driven car does care. He doesn't see the overall traffic efficiencies, and even if he did he wouldn't take much notice. All he cares about is getting to the destination asap. So it is a different speed decision than the robot.
That's one reason. Then there's the money printing aspect of it and the environmental aspect of it.
But they're inconvenient for getting from point A to point B. Which is why they've been replaced by cars.
I suspect the same will happen with cars. I love my fast cars, but apart from that once a month weekend drive, I really don't get to flex its muscles.
I'd much rather relax in my car on my way to work every day. Then I could buy a more powerful car which I could take out to the race track on weekends.
Particularly when you can work in your car. Or watch a movie in your car. Or have sex.
But joking aside, that's where these systems will eventually take us.
And compared to that, how many accidents were prevented by the use of autopilot? How many times did the autopilot make a better decision than a human would have? You can't compare to an absolutely perfect state (zero accidents). You have to compare to the current situation.
I was only remarking on the possibility that some of the issues this automation has created in aircraft might also be seen in cars, especially with things like still having to keep your hands on the steering wheel despite the fact that the car is in control.
And, unlike your rational analysis, I don't trust politicians or the media to think about it like that. Instead, I'm sure the first accident that could even be remotely attributed to an automated system would immediately ignite a firestorm of bullshit articles and possibly damaging regulation.
They're typically called Lane Keeping Assistant, Adaptive Cruise Control, Blindspot Warning, Automated Parking, Traffic Sign Recognition, etc.
The emergency steering bit is interesting, though no further details are provided. It requires the car to ensure that there is a safe space to steer into, which is dicey for a forward-collision emergency braking system, so I'd conjecture it is connected to the side-collision warning and allows collision avoidance only when there is enough space in the current lane.
You could make the whole brand argument, like with Apple releasing the iPad, but....I simply disagree. Tesla deserves the credit they're getting.
He doesn't have 100 years of history weighing him down; he hasn't made a lemon yet; his cars are actually pretty awesome.
But... all these features do already exist in other brands.
Disclaimer: I work for GM
The second, and likely more critical, differentiator (IMO) is security. Mr. Musk would not accept public shaming of this magnitude:
And I only pick on GM here since, well, you work there. When it comes to software in cars, Tesla treats it as a true part of the vehicle engineering. Others seem to still treat it as an afterthought, and then the question starts to linger: how good is the software in all of the other vendors' "features"? How much QA have they done on lane keep, blind spot, adaptive cruise, etc.?
I don't own a Tesla (I really wish I did), but if I had to place a wager on a car manufacturer's QA process and ability to build fault-tolerant vehicle systems, I would pick Tesla to beat the competition handily at this point. While I realize it's a subjective matter, and there's really no good way to compare, the company's direction seems pointed in a more apt approach than others'.
The quote: "Model S is designed to keep getting better over time" reminds me of kaizen (1). It is a really awesome concept that not only can the car manufacturing process and technology keep getting better, but the cars can improve as well.
That being said: OTA update is REALLY FUCKING SCARY for cars. What if someone puts the wrong update in the queue accidentally? (2)
The historic attitude to modules in cars is also important. Modules run "code", but it is treated as mechanically as possible. Could you imagine changing out a few pistons while driving? Probably not. This is a failure of imagination that is being addressed now!
It is a truism that Big, Old, Large companies are risk-averse. The downside to doing a bad OTA to a car is unlimited!
WRT Tesla's QA, I don't know of anything that has been published in this regard. I would hope they are "doing it right". I hope they are successful, and I hope everyone is inspired by their leadership and learns from them.
1: https://en.wikipedia.org/wiki/Kaizen - Article claims it was introduced by American business people, but Japanese companies continuously improved it =D
OTA update is no more or less scary than any other form of software update, or in fact any other form of mechanical update.
Software engineers are generally used to the level of rigour that goes into their software. If you're a web developer shipping a commerce application, there's an appropriate level of testing and process, because there's only a certain level of reliability you need to hit, and spending more money on that would slow down your development. The way you go about delivering software for medical devices, for instance (which we do), is a completely different process with a whole different level of rigour, testing, and documentation. Because that's appropriate in that environment.
There's a whole lot more documentation and thought that goes into the beam that stops the top of your house from falling down, than goes into the beam that stops your garden shed from falling down. It's no different than software.
At the same time, consumer electronics are routinely broken by OTA updates.
Cars fall squarely in the middle, high volume and high price. Additionally, failures carry a high risk. Nobody will die if your webshop goes down, but if your car decides to steer into oncoming traffic, well, bummer.
The support beam analogy is flawed in the sense that the beams are simply made bigger to ensure they're strong enough even with considerable material defects, but this doesn't work for software, where a single little bug can lead to a catastrophic failure.
I am not aware of anything other than cars where such a high number of devices carries such a high risk factor. Certainly doing OTA car updates in a commercial environment is possible, but there is not yet a relatively foolproof way to do it.
Regarding OTA: it is. But ignorance is even scarier. Look at any company that embraces CI (continuous improvement). Amazon: how many changes to production do they push a day? Now compare that to a legacy F50 that has processes designed to be change-averse, as they view change as risk in and of itself. This seems to be the viewpoint you're working from through GM, and maybe (I'm speculating here) you're influenced by the process internally. Maybe your view is that it's risky because of what you're exposed to?

My guess is that, given the culture of Tesla, critical software OTA is not taken lightly. Speculation, but they likely have a far more rigorous deployment process than many others, since they've done this from the beginning. If I can suggest reading on this subject, I would point you in the direction of Gene Kim's work in this area; he has studied high-performing organizations and, in a nutshell, has found that companies that embrace change and do it frequently have fewer operational problems than those who don't. Being risk-averse seems to compound mistakes, and this could very well tie into the hypotheses around practice: the more you do, the better you are, and "10k hours".
While I agree accidents can (and will) happen, again I wouldn't bet that it happens to Tesla first or with any frequency. Keep in mind Tesla is riding on software quality, full stop. If they have a problem there, it could be detrimental to the point of failure. This is a good thing for consumers, because they're most likely getting a superior product comparatively. And we already know that non-OTA software that has gone out the door in vehicles has caused death and harm. Is it scary that those bits are not able to benefit from a timely OTA update? There are definitely two sides to this coin.
And finally... While I know it's been said quite a bit that "we've done that", I'm not sure people are grasping the reality of what Tesla is doing. While I understand others have these features, the Jalopnik short sums up what they've accomplished and where, in my opinion, others are definitely lagging behind. If you haven't watched it, definitely do:
Again... Even if others are kind-of-sort-of doing this today, the iterations will happen across model years. No vendor has the long-term upgradability that Tesla does at this point. Not sure I would trust an American car to change lanes on its own based on its situational awareness as shown in the video.
This reckless Infiniti driver even gets into the passenger's seat while the car is driving:
The Tesla can change lanes on its own while the others can't, but the others have some interesting gadgets the Tesla doesn't - for example, Audi has the super trick night vision display which also picks out and highlights/alerts on pedestrians and animals.
I looked exactly like the guy in the video when I first test-drove my car two years ago, but now I'm so used to it I don't think about it any longer.
But you gotta give it to Tesla's marketing department that they can get people excited by a feature you could get in a damn nice car two years ago, at half the price.
It only requires you to grab the steering wheel if you want to change lanes or turn.
It's usually ok to just nudge the wheel a bit to let it know you're still there.
Note that this is for obvious liability reasons, not technical reasons.
Note that the Tesla reaction video is at low speeds in stop-and-go traffic, and my car is just as good there.
It would be interesting to test the Tesla in the same conditions as the Hyundai is failing in.
I'm quite envious of the OTA update; there's no way Mercedes will ever upgrade the Distronic software in my car. If I want the improved version, I have to buy a newer model. :-/
Ping pong back and forth in the lane, no curve steering. Bleh.
I've had a bit of cognitive dissonance recently while reading Elon Musk's comments on Apple's electric car project, because I have to remind myself Tesla isn't owned by Apple. Tesla does do some great engineering, but I think they also have Apple levels of hype within technology circles.
If they played it the way Tesla did, they probably would. Recall that nobody buying Model S knew about those capabilities up until the moment Tesla announced, "by the way, at some point we've started packing Model S with sensors; if you've bought recently then you have self-driving capabilities that we'll enable soon with a software update". This came as a surprise for everyone.
First-mover isn't all that matters.
I've driven many cars with Lane Keep Assist - it is really a warning/minor correction system to keep you from crossing over your lane. If you let go of the wheel, most systems will ping pong from lane to lane.
Tesla's system actually centers you in the lane, steers around corners, and handles changing lanes. That is the innovation.
I am unable to find any info on the Range Rover site about capability to auto steer (just Auto Cruise Control). There are also no Youtube videos showing this functionality.
Can you provide more details?
For those with more knowledge about cars, how does the sensor array in the Model S compare with similar models from companies such as BMW, Audi, Mercedes-Benz? I'm interested in knowing if it's software or the already installed hardware holding back recent luxury cars from similar capabilities.
Also, does anyone know anything about the (digital) security features of the Tesla? This announcement from Tesla makes it clear that actual control of the vehicle can be modified by an over-the-air software update. With the recent Jeep hack in mind, does anyone know if something similar is possible on a Tesla, or if there are safeguards such as signed updates? As one of the most computerized cars on the market, I tend to think the Tesla cars might also be some of the most (maliciously) hackable cars on the market.
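I don't know what Tesla actually ships, but signed updates are the standard safeguard being asked about here: the car only installs an image whose signature verifies against a vendor key baked into the vehicle. A minimal sketch using the Python `cryptography` library; the key value, file names, and flow are all hypothetical:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# Hypothetical: the vendor's 32-byte Ed25519 public key, baked into the car at build time.
VENDOR_PUBLIC_KEY = b"\x00" * 32  # placeholder value for illustration only

def verify_update(image_path: str, signature_path: str) -> bool:
    """Return True only if the OTA image was signed with the vendor's private key."""
    pub = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        pub.verify(signature, image)      # raises InvalidSignature if tampered with
        return True
    except InvalidSignature:
        return False

# The installer would refuse to flash unless verify_update("update.bin", "update.sig") is True.
```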
Car makers (and suppliers) have to learn how to make software that makes optimal use of the existing hardware, but they've got a long way to go still. Every major player nowadays has a research center in Palo Alto or thereabouts, seemingly trying to learn how to do this via osmosis, but it will take some time until they really understand how to keep pace with information technology and how to bring it into their mammoth manufacturing and legal frameworks. Never mind the necessary mindset to pull it off.
This stuff is coming fast...
I'd love to learn more about the increasing "computerization" of cars. Do you know of any good publications, blogs, etc for someone to learn about the computer systems powering cars?
Maybe they expect drivers to treat it like beta software - "Please don't use these features in production cars. Make sure you keep backups of all drivers and passengers in case of bugs."
That spells 'not ready for production and release on public roads'.
Compare it to the regular cruise control, already present in production for years. Would you say cruise control is not ready for production or release on public roads? It's a tiny subset of what Tesla's update does. (and it doesn't even tell you when you need to take over)
If you're required to be ready to intervene, that's about the worst possible way to introduce automated driving. And more to the point: the better the implementation, the longer between 'interventions', and the more likely that such an intervention will not be useful at all.
Which sense of "must" is used here? The car seems to play an unwinnable game with the driver: keep your hands on the wheel or I'll...what? Disengage autosteer and perhaps crash? With no enforcement mechanism, drivers are incentivized to "abuse" (aka "use") the system as much as it allows.
It's like having to hold a button on your hands-free headset in order to be able to talk :-)
"The assumption that humans can be a reliable backup for the system was a total fallacy." "It made us change our whole path."
That means that practical self-driving cars are an all-or-nothing affair.
In all seriousness, this system to monitor driver attentiveness will generate lots of very discoverable evidence.
Personally, I'd never use something like Autosteer. As far as I'm concerned, either I'm driving the car (i.e., directing its movement, even if that movement is realized by computers/microcontrollers), or a computer is driving it - not something in between or both.
How different is Autosteer from regular cruise control, in this regard? Or do you think that this level of automation might encourage people to distract themselves, without having quite enough technology to allow them to do that?
Take away the need to steer, and the only thing drifting will be drivers' attention.
Essentially you just need to hold your hands on the wheel and relax your arms. Before you've driven with this feature you might not realize it, but when you drive you are constantly making minor adjustments to the steering wheel, which takes quite a lot of effort, both attention-wise and physically. This takes all of that away but still keeps your brain in "I am in control of the car" mode. If you were to keep your hands off, I think it would be easy to zone out, and if something happened there might be too much of a context switch for you to handle the situation fast enough.
Is there any video of the tesla doing this at highway speed? I can only find city driving.
> Research that Stanford has done shows that drivers resuming control from Level 3 vehicles functioning in autonomous mode take 10 seconds just to attain the level of ability that a drunk driver possesses. And to get back to full driving competence takes 60 seconds.
I'm not sure if they install the hardware sensors regardless & only control access via software, though, but I'd be curious about this for sure. Anyone know?
IIRC, when the Autopilot hardware/option was originally released, the "Tech Package" was a prerequisite for buying the "Autopilot" option.
For the autopilot features, the difference was only software. Even before the Autopilot option was first available to order in October 2014, cars were being silently shipped with Autopilot-ready hardware for several months. For those cars, it's possible to call up Tesla with a credit card and enable the feature over the phone.
These were moved to the Premium Interior and Lighting Package ($3000).
I look forward to the next step up from all the car makers, which is clearly the car driving on its own in a much more confident way, with the driver simply there to manage exceptions as opposed to being 'assisted' by technology as is with the current implementations.
> Auto Lane Change
> Changing lanes when Autosteer is engaged is simple: engage the turn signal and Model S will move itself to the adjacent lane when it’s safe to do so.
A single sentence! What's the point of having driver's license lessons and testing if the fundamental operation of the vehicle can change so drastically?
Am I being a luddite, or does anybody else feel this way?
Hah, implying that these are actually useful. The Washington state test and generic lesson plan don't even include highway driving. It's a 20-minute test that quite possibly anyone could pass; all I had to do was briefly drive around, and the hardest parts were probably parallel parking with a couple of feet of leeway and controlling my speed down a hill.
If it's ridiculously, over-the-top easy to receive and keep a license, even when it shouldn't be, then I argue the use of said license, and the checks around it, are useless. All the check does is verify that the person can do the bare minimum of what would be considered "driving", and, worse, it really verifies that the person was able to do it X years ago, where X can be as far back as 50 or so years. At that point, what's the use of checking? You are right that we should be more rigorous in testing and validation, but as long as we aren't, the whole system is put into question.
Same concept as putting code inside a function. If you can make changing lanes a functional process that works safely the same way 100% of the time via computer, why not?
Right now that process is effectively re-written and executed from scratch every single time a human performs the maneuver, which results in errors and inconsistency.
I welcome our computer overlords.
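To run with the "code inside a function" analogy above: the value is that the procedure is written once, reviewed, and then executes the same checks in the same order every time. A purely illustrative sketch; every method and check here is invented:

```python
def change_lane(car, direction):
    """Illustrative only: one reviewed procedure, identical every time it runs."""
    target_lane = car.current_lane + (1 if direction == "right" else -1)
    if not car.lane_exists(target_lane):
        return False                        # nothing to move into
    if not car.gap_is_safe(target_lane):    # sensor check of the adjacent gap
        return False
    car.signal(direction)
    car.steer_into(target_lane)
    car.cancel_signal()
    return True
```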
In Europe, there are different classes of license for auto vs manual transmission vehicles.
I have no earthly rationale for why we ever legalized cruise control in the first place, but that's the status quo we are comparing against. Anything that makes cruise control safer is an improvement. Arguably without cruise control most of these innovations would need huge amounts of lobbying to pass, but they're actually pretty easy to sell in comparison to what people have today.
I drive entirely using cruise control to ensure compliance with speed limits.
Right now, IMO, we just need more self-driving cars in general to move the whole concept in terms of acceptability and availability. There is certainly lots of room for innovation, but for now just more product selection alone is valuable.
An example in this vein: imagine a firmware update to a wi-fi router that gives it MIMO support. A MIMO antenna isn't any different from a regular antenna; the difference comes from the baseband firmware doing clever-er math to pull out overlapped signals, spatially model their sources, and modulate its own output so the signal constructively interferes for best performance at the destination.
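For what it's worth, the "clever-er math" in that example boils down, in its simplest form, to solving a small linear system: the receive antennas see a mixture y = Hx + noise of the transmitted streams, and a zero-forcing receiver separates them with a pseudo-inverse of the channel matrix H. A toy sketch with random numbers, not a real radio:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2))                 # toy 2x2 channel between antenna pairs
x = np.array([1.0, -1.0])                   # two overlapped transmitted symbols
y = H @ x + 0.01 * rng.normal(size=2)       # what the two receive antennas observe

x_hat = np.linalg.pinv(H) @ y               # zero-forcing: undo the mixing
print(np.round(x_hat, 2))                   # ~[ 1. -1.]: the separated streams
```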
In a Cliffs-notes sense, both vehicles seem to offer an autonomous driving mode in normal conditions. Like any other competitors, they both have a few unique features that are neat but don't really change the overall experience dramatically.
To be clear, I currently own neither a Tesla nor a Mercedes, and I'm basing this just on looking over the specs, but they appear to be fundamentally comparable.
In addition, the Mercedes system can't change lanes automatically.
Massachusetts has terrible road striping; it seems as though they get around to it about every four or five years, waiting until the lanes and ramp markings are beyond dangerous. This has been irritating me for years. And then they seem to use some kind of cheap paint that wears off quickly. Public works job security, I suppose.
But automated lane navigation will require clear markings. Hundreds of thousands of deaths later, we just might finally get a safer road system. Pathetic, but better late than never, I guess.
This seems odd, my understanding was that drivers needed to "check in" every so often, not handle the wheel at all times.
Patience young Padawan.
This machine will keep pace with traffic. OK. Does that mean it will break speed limits? Unless it is scanning for each and every potential road sign, it simply cannot respond to arbitrary/temporary limits. Determining the legal limit on a piece of road is a complex task. Road construction, local conditions, sunrise/sunset, time of year (school zones), and even weather can be factors. And let us not forget "Speed limit X when children on road". You need some serious CPU time to work out whether that person walking along the road is a schoolgirl or a construction worker.
IMHO, any system not capable of determining the speed limit accurately is a legal liability. Have fun with the tickets.
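To illustrate how many factors can stack into "the legal limit right now", here's a hypothetical sketch; every rule, number, and parameter name is invented, and real rules vary by jurisdiction:

```python
def effective_limit(posted, school_zone=None, school_in_session=False,
                    construction=None, heavy_rain=False):
    """Illustrative only: the lowest applicable limit wins."""
    limit = posted
    if school_zone is not None and school_in_session:
        limit = min(limit, school_zone)
    if construction is not None:
        limit = min(limit, construction)
    if heavy_rain:
        limit = min(limit, posted - 10)     # made-up weather reduction
    return limit

print(effective_limit(50, school_zone=30, school_in_session=True))  # 30
print(effective_limit(50, construction=40, heavy_rain=True))        # 40
```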
>eliminating the need for drivers to worry about complex and difficult parking maneuvers.
No. Parallel parking is not a complex nor difficult maneuver. It is total beginner territory. No lives are at risk. With a decent bumper, even risk of property damage is minimal. Anyone not capable of learning to parallel park probably shouldn't be behind the wheel of much anything. Anyone buying this car to avoid such mundane tasks isn't someone with whom I want to share the road.
Now, I have no idea whether or not this feature is utilized for autopilot (though I would kind of assume it would be), but it is there :)
Road signs also have situational contexts. e.g. in the UK there are road signs indicating a max speed for an upcoming sharp bend (or series of bends) in the road. There isn't always a matching speed limit sign after the bend because drivers are assumed to work that out for themselves. But will the cars do so? Or will your self-driving car get stuck at the lower speed?
That also avoids enforcement problems such as where the new limit should begin and end, and the difficulty of measuring vehicle speed through a corner.
You (and the parent) vastly overestimate the difficulty. Speed-limit signs are reflective, high contrast, use a known font, and have a very limited symbol set (0-9). Car CV systems can distinguish pedestrians from the background; they can read a freaking sign.
Sub signs I'll give you. Don't have any where I live though.
Compared to all the other things it must do, reading signs is on the easy list.
The MobileEye system in the Tesla (and other manufacturers) has some of the most sophisticated CV software on the road - not just scanning for road signs, but identifying road markings, lane markings, curbs, obstructions, traffic signals, etc.
Tesla is most likely using its machine vision system to identify speed limit signs and then incorporating this data into its mapping data.
For speed limits, the obvious technical problem is knowing when they end, if they apply to your road, or maybe a parallel road or an off-ramp, which is mostly easy for a human to tell, but much harder for a camera.
Even though the camera may be better on average processing all this information than the average human driver, it's an unanswered question, from a legal point of view, who's responsible when the camera is wrong.
Volvo has recently taken a stand proclaiming legal responsibility, but it remains to be seen if that's even a possibility in many nations.
There may be some jurisdictions where this is the case, but in most I'm familiar with it is fairly well settled that the human driver of a car is legally at fault if the car is driven in violation of the law, and that the human driver is also legally responsible for assuring that all mechanical features of the car are maintained so as not to interfere with the driver's ability to keep the car within the law.
(In many jurisdictions, the manufacturer may have liability for accidents and injury due to manufacturing defects, but that doesn't generally absolve the driver for being responsible for driving consistently with the law.)
It is very possible to get a speeding ticket while driving well below the posted limit. It isn't common, but where fog/snow/ice/rain are a factor I have seen cops hand them out to idiots. And 'conditions' can include the condition of your vehicle: driving on bald/track tires in the rain can result in an 'unsafe speed' ticket should a cop see you slide.
"Hey, is it a school day?"
"No, we take a couple days off before exams."
"Oh. It's a good thing then that we haven't ticketed anyone."
"But doesn't the law say 'normal school day'? It is a friday."
"Ya, but your school ain't normal."
You could say that the ultimate responsibility lies with the driver, but taking this position effectively means that all self-driving features are pointless (since it implies the driver should be concentrating on the driving at all times).
If the responsibility lies (partially, at least) with the car manufacturer, they are going to get a lot of lawsuits thrown at them when it gets the speed limit wrong. Every speeding ticket, every accident where the car was going too fast, etc.
If the highway speed limit is 55 and everybody is going 70, you'll make the road a more dangerous place by going 55. I don't know how it works from a legal standpoint, but from a practicality and safety standpoint, I'd rather go with the flow.
Since you can't choose who you're on the road with, doesn't that make this feature more appealing?
You seem to be implying that most drivers are past the beginner level of skill in driving. I have not found this to be the case.
FYI, nearly every manufacturer of motor vehicles sets their speedometers to read slightly high; on motorcycles it can be as much as 10%. This is to avoid any accusation that inaccuracies in their product (e.g. changing tire diameter) might result in someone going faster than indicated.
IIRC, you set cruise control at the speed you'd like to go. It'll try to keep that speed where safe, but if traffic is moving slower it'll slow to match. It won't try to follow someone doing 90 if you set cruise to 65.
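In other words, adaptive cruise effectively takes the minimum of the driver's set speed and the speed of slower traffic ahead; roughly this, as a sketch (the function and its arguments are made up for illustration):

```python
def cruise_target_speed(set_speed, traffic_speed=None):
    """Hold the set speed, never exceed it; slow down to match slower traffic."""
    if traffic_speed is None:
        return set_speed                          # open road: hold the set speed
    return min(set_speed, traffic_speed)

print(cruise_target_speed(65, traffic_speed=55))  # 55: slows to match traffic
print(cruise_target_speed(65, traffic_speed=90))  # 65: won't follow someone doing 90
```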