Nobody else is even close.
This problem is slowly being solved, by normal engineering practices. The "fake it til you make it" players are being left behind. Uber has been shown to be totally incompetent. Tesla really just has a good lane follower and a mediocre car follower. Apple has been trying to hand-wave by talking about "significant disconnects" vs. all the disconnects the DMV requires them to report.
The LIDAR industry is struggling, but there is progress. Quanergy seems to have been mostly hype. Continental, the big European auto parts maker, bought Advanced Scientific Concepts, which makes and sells a good but expensive flash LIDAR used in DoD and space applications. They packaged it up for automotive use, and are waiting for the self driving industry to catch up. That technology uses exotic indium-gallium-arsenide sensor ICs, which are expensive in small quantities but would probably be affordable if they could sell a few million a year.
This looks like a problem that's being solved. Just not fast enough for startups used to quick payoffs in software.
I am skeptical of that, because I expect that even if disconnects are ultra-rare overall, and the testing is representative of normal usage, there will be enough people that frequently use the technology in ways far off on the tail of the distribution that for them, disconnects will be much less rare. And that might create a PR blowup or worse.
I think what is worrying me may be something called "heteroscedasticity".
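To make the worry concrete, here is a toy calculation (all numbers invented for illustration) showing how a low aggregate disengagement rate can coexist with a small tail subpopulation that sees disengagements far more often:

```python
# Toy numbers, purely illustrative: 95% of drivers use the system in
# typical conditions, 5% use it in unusual ways far out on the tail.
typical_share, typical_rate = 0.95, 1 / 100_000   # disengagements per mile
tail_share, tail_rate = 0.05, 1 / 1_000

# The aggregate rate looks reassuring...
overall_rate = typical_share * typical_rate + tail_share * tail_rate
print(f"Overall: 1 disengagement per {1 / overall_rate:,.0f} miles")
# ...but tail users experience a rate roughly 17x worse than the average.
print(f"Tail users: 1 disengagement per {1 / tail_rate:,.0f} miles")
```

Even here the aggregate number hides a 17x spread between the typical user and the tail, which is the heteroscedasticity point in a nutshell.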
But fully self-driving cars only have two inputs: starting point and destination. It doesn't matter whether you are old, sleeping, 13, or a dog. Unless some roads are more susceptible to accidents than others, or manufacturers let people fiddle with how much the car should risk accidents in exchange for a small decrease in travel time, I don't see how it's possible. And if manufacturers really do offer that option, it's truly their fault.
You have to be very old for this to be true.
It turns out that 16-25-year-olds are worse than any age group until something like 75-80.
I feel like it's the opposite... When I drive the speed limit, other drivers around me become more aggressive. Even if they have to exit the highway 20 seconds later they fiercely fight for tiny speed differences of 10km/h, start flashing their headlights, etc.
If you are driving during rush hour when everyone is stressed, then idiots will flash headlights, engage in risky behaviour, etc. - that's true. And yeah, driving slightly over the speed limit to be consistent with the traffic around you is generally a good idea. But if they are too far over the limit, it's their doing, not mine, and if their risky behaviour causes an accident in which I'm involved, it won't be my fault if I'm following the rules. Conversely, no judge will accept "I sped because of the idiots around me" as an argument if you cause an accident.
I personally just let the idiots be idiots. Especially when the limit has a cause, e.g. a construction site or something: they put a lower speed limit for the safety of the construction workers. I won't compromise their safety, not a bit.
But on (sub)urban streets driving the speed limit decreases the chance of causing serious injury to other road users, including pedestrians, even if it may lead to an increase in the total number of crashes between motorists.
(Note that while I work at Google, I don't have any particularly relevant inside knowledge. What I'm saying is consistent with their current pilot in Phoenix and public articles.)
By chance do you happen to live somewhere without icy roads?
But I too am hopeful that machines are better than human drivers in all scenarios.
Well, guess what, those guys are still on Reddit making snarky comments and the Dropbox guys have IPO'd.
But fair play for playing with the analogy. Funny.
Yes, people who can't live in pleasant places will not get some tech early. C'est la vie. We'll get there.
If Uber and Tesla continue with their current plans, they will have enough accidents that self-driving cars being in accidents will cease to be headline news. It just becomes an actuarial math problem, like regular car insurance is today.
(This is my opinion and not that of my employer.)
Maybe humans shouldn't be making left turns in traffic or passing trucks either. Those are probably, statistically, the most dangerous things you can do when driving. But we do them all the time, because we have no idea how dangerous they really are.
Currently, they can’t do any of that better than humans, and probably not for several years.
I strongly suspect that the quest for a low disengagement number is what got that woman in Arizona killed by Uber. These reports really should be sealed from the public. AV startups seem to be using their disengagement rate as a way to raise money, aided by journalists hungry to make sense of a highly secretive field who publish these figures completely out of context.
The Uber self-driving program was trying to move too fast, before the software was ready. That and the operations failure made it inevitable that someone was going to be killed.
Yeah, the public has no right to know what vehicles on public roads are capable (or incapable of).
And we wouldn't want pesky innocent deaths to impede Uber's ability to "bring self driving to market"...
This report covers disengagements following the California DMV definition, which means “a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.” Section 227.46 of Article 3.7 (Autonomous Vehicles) of Title 13, Division 1, Chapter 1, California Code of Regulations.
Waymo has developed a robust process to collect, analyze and evaluate disengages for this report. We set disengagement thresholds conservatively for our public road testing. The vast majority of disengagements are not related to safety. Our test drivers routinely transition into and out of autonomous mode many times throughout the day, and the self-driving vehicle’s computer hands over control to the driver in many situations that do not involve a failure of the autonomous technology and do not require an immediate takeover of control by the driver.
To help evaluate the safety significance of disengagements, Waymo employs a powerful simulator program. In Waymo’s simulation, our team can “replay” each incident and predict the behavior of our self-driving car if the driver had not taken control of it, as well as the behavior and positions of other road users in the vicinity (such as pedestrians, cyclists, and other vehicles). Our engineers use this data to refine and improve the software to ensure the self-driving car performs safely.
They can ship a product that can drive under the conditions they're testing under. Which would be appropriate for autonomous taxi service in certain municipalities, but presumably not for the general market.
Granted, that's been their business plan for a while now.
My perhaps-false recollection was that Waymo disengagements flatlined from 2017 to 2018.
If they really are doing this well with non-freeway driving, then that just tells me they could have released freeway driving about 4 years ago.
I agree with your statement about Tesla's marketing of self driving. My Grand Cherokee isn't so far off their mark. It has lane assist, adaptive cruise control with stop, and parallel and back-in parking. I'd bet Jeep could do Summon and more stuff on the highway.
What self driving product do they have? Elon's "Oh yeah, we will have full self driving coast to coast in [8 months]" is horseshit to the point of "How is he allowed to say this on an earnings call?"
Other competitors, widely recognized to be ahead of Tesla, estimate 5, more likely 10, YEARS. Coast to coast? Your Tesla will navigate a Pittsburgh winter storm on poor roads in Level 5 autonomy... this year? No.
So then we're with "sophisticated driver assist". Actual products you can buy with sophisticated driver assist are sold by: Toyota, Honda, BMW, Mercedes, Audi, Chrysler...
My Jaguar has: adaptive cruise, four-way sensors, lane assist (including the ability to prevent you from changing lanes if a blind-spot obstruction is detected), adaptive front lighting and high-beam assist, driver condition monitoring (detects multiple drift or lane-departure events, or sub-optimal braking), traffic sign recognition with adaptive speed limiting, rear traffic monitoring (including the ability to avoid, if possible), full auto parking, and so on.
Those cars do not steer themselves. The difference between having to steer the car and not having to steer the car is extreme.
It totally changes the way you drive. Instead of looking down your lane and keeping the car centered, you scan all around you and keep situational awareness of the drivers around you while the car keeps itself in lane. It’s a totally different experience.
Assuming you keep your hands on the wheel as required, what exactly is the difference? I only found it to be a distraction: it takes me considerably longer to re-engage as the human driver in unexpected situations after a period where I've let go a little.
I don’t have any “reengagement lag” as you might say. When I want to start steering the car again I just start steering. There is a slight resistance felt in the wheel if you turn past the autopilot without flipping the right stick up to explicitly disengage AP first.
It’s significantly nicer to drive long stretches with this system, but I can’t imagine doing it without your hands on the wheel. When the sensors fail (weather or poor lane markings), you need to be able to take over immediately. Granting too much autonomy to the system is just too risky (and the point of the article).
Not trying to be rude, but that is nothing at all like driving with AutoPilot.
 - https://youtu.be/iKWBdySgssE
Tesla has effectively incorporated Lane Keep Assist/Automatic Cruise Control into a single product called Autopilot that has vastly better software than other similar products.
Simply put, if other companies had a product that would allow billions of miles to be driven hands-free and feet-free, they would have it, but they do not - not even close. Tesla does.
I can't remember the last time (actually I can - it has never happened) my car treated a roundabout with a shallow center median as "entirely optional: follow the circle or go straight over", but I can find multiple videos of Teslas doing that.
The two cars I've driven that are modern (2017 and 2018 Audi and Jaguars) also didn't hit a big white pickup truck in broad daylight in parking mode, but I've seen Teslas doing that, too.
So, how are we defining "bunch of worse products"? "Less advanced, but better tested"? If that's the case, I think I'll take that.
Do you actually drive your car hands free and feet free for more than 30 seconds ever?
Fairly certain that's illegal in many cases. And apart from anything else, doesn't Tesla itself require steering-wheel contact/torque every 15 seconds?
Is it? This implies that, contrary to what you say, most other manufacturers do offer equivalent features in equivalent luxury vehicles. Tesla might have beaten many of them by a year or so, and might sell more Model Xs than Toyota sells high-trim Lexuses, but neither of those things says anything about the quality of the driving technology. And from what I've seen, Tesla's isn't particularly spectacular.
I have a self-driving automotive product you can buy. It doesn't work as advertised, either.
Calling a lamb's tail a leg doesn't make for a five-legged farm animal. This reminds me of the conversations about, say, phone authentication using facial identification. "Samsung had it before FaceID!" Yeah, I had one of those Samsung phones. It did not do what it said on the tin. So, yeah, Tesla has a product you can buy, but it is not a self-driving vehicle despite what they might tell you.
It's not full self-driving, but it definitely is driving itself and providing value.
Stop coming at me with "Waymo is 10 years ahead of Tesla"... OK, great - are they shipping and delivering any value to paying customers? Waiting.......
In 5 years time, do you really think Waymo will have paying customers and have gained regulatory approval? Hell no...
Tesla at least realizes they can't get past the regulatory hurdle. Keep the driver at the wheel, provide value, automate, and refine.
The best thing is that as much as people want to argue on the internet about "which is better", it is somewhat inevitable they'll both succeed in their own way. Waymo will eventually crack the nut for Level 5 autonomous driving with LIDAR, just as Tesla will crack the nut for their full self driving (defined as driving from the east coast to the west coast without any involuntary disengagements) with machine vision. I actually (potentially incorrectly) suspect Tesla will win this war first, as they simply have a multiple of any competitor's driving data. How many more hundreds of millions of miles has Tesla Autopilot logged compared to Waymo? Reminder: Tesla is building ~5,000-6,000 cars per week and tends to sell them pretty well. Their data moat only continues to widen. Once they figure out the right models for really pulling it off, they'll make HUGE leaps with the data they have. Some nights, my Model 3 will upload 10 GB of data to Tesla. Hopefully it is from Shadow Pilot to train the fleet.
TL;DR: There is nothing stopping both Tesla AND Waymo from solving full self driving using competing technologies and different approaches. It is an exciting time to be alive!
I call the same thing you call it, apparently.
And if you don't mind me asking, how would you quantify in real money the value added by the system?
I wouldn't quantify the value in real money. I would say the main benefit it provides is that I have much less decision fatigue and am less stressed after my commute. I also don't care how fast I get there anymore, since I just optimize for less stress, and my trip is enjoyable.
The other main value I commonly see overlooked is safety from distracted people. If we can assume people are going to keep texting and driving (they are), would you want them to do it while actually driving or with this system engaged?
Then you're not engaged. It's scientifically proven that your brain is no longer alert if it's not constantly engaged. And you can't be magically less stressed by decisions while being fully engaged and alert in order to verify each of those decisions. It's like saying a calculator makes your life easier because you no longer have to calculate to get the result - you only have to calculate to verify the result.
You're either less stressed because your mind has simply offloaded some tasks to the car, contrary to the law and common sense, or you're actually engaged and your brain is doing what any driver's normally does, so the whole effect is placebo. Which is cool, I guess, but that last one can be had for a lot less.
Driver assists are there as a backup, not to be relied on to do what you no longer feel like doing because it's too stressful. And if you feel less stressed letting a pretty dumb AI (compared to a human driver) with poorer eyesight than yours drive you around, you must be a pretty laid-back person anyway.
...Then what is the point of that system?
Tesla's 'self driving' is incredibly dangerous because it puts the driver in a set of conditions that make it very difficult to maintain attention on the road. And yet the driver must be able to take over from the 'self driving' system with subsecond warning if things go awry.
That's not value adding, that's a huge risk. I would not use that system even if someone offered to install it for free in my car.
Tesla's Autopilot is a marketing twist on lane-keeping plus adaptive cruise control - technology available in almost every car at a similar price point. It is not a step towards actual autonomy.
1) Self driving tech doesn't exist today. It simply does not. There's lots of people working on it and they are using different strategies, but it is not clear how long it will take, which strategy is technically superior, which strategy is more economical, etc etc. There's a lot of very strong opinions floating around (LIDAR is essential! No, vision and lots of data! No, 3D maps or bust, etc etc). It is important to keep in mind that we don't know the future, the people busy inventing it don't know how things will play out and outside observers know even less. The tech industry is littered with strong opinions which have aged terribly.
2) Musk's first principles aside, Tesla was never going to do LIDAR. They're in the business of shipping real cars to real people, and there was never any possibility of mixing that with LIDAR. If they had gone with LIDAR and started a research program like Uber's, they would have started years after Google, with less data, far fewer resources, and absolutely no technical advantage. It is far, far more likely that such a program would have bankrupted them than beaten Google. In essence, whether Musk's first-principles argument is real or a sales pitch doesn't matter. Tesla's choice was their current approach or nothing at all.
Can you explore that a little? We have consumer products that contain LIDAR already (e.g. Neato Vacuums). The sensors no longer cost thousands of dollars, and could supplement other types of inputs to provide a broader picture of the road.
Cruise control requires you to pay close attention etc., but you can avoid making a lot of minor speed corrections, which makes long drives less taxing. Adaptive cruise control takes this one step further, again reducing cognitive load. Lane following might not seem like much, but it represents a huge number of tiny course corrections.
I could go on, but even if you can't read a newspaper, the reduction in stress and muscle fatigue really adds up.
Second, these systems add redundancy. Both the system and the human driver need to fail, which means someone who's only paying attention 98% of the time can still cover 49 out of 50 of the system's failures. Current data suggests a net gain in safety, which is likely to improve as these systems mature.
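The 49-out-of-50 figure follows directly if you assume attention lapses are independent of system failures - a strong assumption in practice, but it's the arithmetic being invoked here (the per-mile failure rate below is invented for illustration):

```python
# Sketch of the redundancy arithmetic, assuming attention lapses are
# independent of system failures (a strong assumption in practice).
p_human_inattentive = 0.02        # attentive 98% of the time
p_system_failure = 1 / 10_000     # invented per-mile failure rate

# A failure goes uncaught only when both fail at the same moment:
p_uncaught_given_failure = p_human_inattentive
p_uncaught_per_mile = p_system_failure * p_human_inattentive

print(f"Human covers {1 - p_uncaught_given_failure:.0%} of system failures")
print(f"Combined failure rate: 1 per {1 / p_uncaught_per_mile:,.0f} miles")
```

The multiplication is where the safety gain comes from, and also where it evaporates if inattention correlates with exactly the situations where the system fails.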
Seems human nature to watch the car avoid 99 problems, and fall into a false sense of security and stop paying attention. Sleep, cell phone, talking to a passenger, or just day dreaming seems ever more likely the longer you are letting the car drive.
It's two people so far who have lethally broadsided an 18-wheeler with a Tesla on Autopilot?
So an attentive human + FSD sounds great. Not sure humans are going to be able to stay attentive though. Sure at some point inattentive human + FSD will be safer than a stand alone human, but it doesn't seem like that's happened yet.
Tesla seems a bit evasive with the safety data. Maybe it's because a human with Autopilot off (but safety braking on) is just as safe as a human with Autopilot on.
Also... if you bought full self driving, you're going to get the new v3 hardware retrofit... that's what you paid for, and it's going to be better... probably a lot better.
(edit: removed link... just google it)
Except according to Elon Musk (whom I don't believe) - it had better, since he's promising that it will be available this year.
In a car, Tesla's Autopilot simply doesn't give you that. If you are reading and your autopilot exits, you could be dead before your focus can return to the road. Planes fly in wide open spaces far from hazards. Cars drive in busy, tight spaces where a fraction of a second lapse in control can be fatal.
I get that Tesla's implementation is analogous to what you find on a jet. But the environments make them very different. So start by dropping the name. It's not autopilot, it's hands-free driving.
> if it needs you to intercede there is pretty much no emergency in which you wouldn't have a few seconds to react.
Traffic avoidance is certainly one. Airspeed dropping while the autopilot tries to maintain altitude is another. This can result in a stall and complete loss of control under the wrong circumstances.
Autopilot does not mean "read a book and let the plane fly itself." You are always cross checking instruments, visually scanning for traffic, running checklists, rehearsing your next steps, etc.
People have some responsibility to educate themselves about the products they’re using.
The definition of autopilot has been this way for decades. You could literally ask anyone of any age what it means and everyone would have the same answer.
There was nothing OK about Tesla calling their system "Autopilot."
This happened to me in X-Plane (sim) the first time I used autopilot. I had dialled in a rate of climb the plane couldn't maintain. Very hard to recover from a stall when the elevator trim is set fully nose up.
Source: Have private pilot's license and flew the Shadow 200 TUAV for the US Army. It is a VFR rated airframe that is flown 100% IFR, by nature of it being an ~11ft x 3.5ft x 2ft drone.
How many of those VFR flights have/use autopilot during their flight, I can't say, but basic autopilot systems for GA planes have gotten pretty affordable, so I would expect there's at least hundreds of flights any given day where a VFR pilot engages an autopilot during the flight.
The main difference is that none of those VFR flights are big airliners with hundreds of passengers. For those who don't know the difference, an oversimplification would be VFR = pilot is 99% in charge, with very little assistance from air traffic controllers outside of special controlled airspace, IFR = controllers are giving the pilot very detailed instructions for the entire flight. And it's pretty much one or the other.
 PDF: https://www.faa.gov/air_traffic/by_the_numbers/media/Air_Tra...
And they do so for the general public, not just for pilots. So it's appropriate to use the general public's definition of autopilot, which is a fully automated, hands-free system.
Yes, exactly. It's a level-2 ADAS - it's about helping you deliver correct inputs to the car, not about autonomous driving in any real sense - regardless of how advanced it might be in a technical sense, or how well-behaving in a best-case scenario! More like the fly-by-wire system in an aircraft than the "autopilot".
And I know that the users are supposed to be aware of this, but Tesla's marketing on this issue is far from consistent, in all sorts of ways. They're doing their users a pretty big disservice.
Real autopilots don't work like that; they are typically extremely simple - in many cases, mechanical PID loops. 99% of all autopilots have almost zero intelligence and no electronic sensors of any kind. Only the best of the best, like CAT3 autoland systems, are anything close to what you see in the movies.
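For anyone curious what "a simple PID loop" amounts to, here's a rough software sketch of a PID altitude hold. This is not any real autopilot's code; the gains and the toy aircraft dynamics are invented purely for illustration:

```python
def make_pid(kp, ki, kd, dt):
    """Return a PID step function: error -> control output."""
    state = {"integral": 0.0, "prev": None}
    def step(error):
        state["integral"] += error * dt
        deriv = 0.0 if state["prev"] is None else (error - state["prev"]) / dt
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * deriv
    return step

# Toy altitude hold: gains and plant dynamics are made up, not from
# any aircraft. The controller just nudges climb rate toward target.
dt = 0.1                              # seconds per control step
pid = make_pid(kp=0.5, ki=0.05, kd=0.2, dt=dt)
altitude, climb_rate = 9_500.0, 0.0   # feet, feet per second
target = 10_000.0

for _ in range(600):                  # simulate 60 seconds
    command = pid(target - altitude)                  # commanded acceleration
    climb_rate += (command - 0.1 * climb_rate) * dt   # toy damped dynamics
    altitude += climb_rate * dt

print(f"Altitude after 60 s: {altitude:.0f} ft")
```

The whole "intelligence" is three multiplications and two sums per tick - which is the point: holding a heading or altitude needs a feedback loop, not scene understanding.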
Over a hundred years of the general public being educated about what that word means. So pretty sure it predates its use in movies.
Ethically debatable, but that's how things seem to work.
Maybe Tesla will get to a point where my TM3 will drive me to work while I nap, or use my phone. One thing that’s pretty amazing is that if they can get to that point, it will come as a free automatic software update or an available for purchase hardware upgrade on my current car.
This month my car will be getting 5% faster acceleration from an OTA update - how cool is that?
But right now, the reality is that I can engage AutoPilot on the highway and it immediately and dramatically changes the driving experience. Instead of focusing on steering the car, I am focused on situational awareness. I am not just looking down my lane, I am looking at the cars around me, who is passing who, what might be coming up around that bend, that guy on his phone next to me, etc.
I can look at other drivers and assess whether they are paying attention, and avoid them if needed.
Because the car is actually lane keeping - not what everyone else calls lane keeping, which (surprise) is a total lie - but actively steering the car through curves and centering it in the lane, probably more precisely than I would if I were steering manually.
I strongly believe, if AutoPilot never truly advanced much beyond its current capability, and as the currently functionality becomes more widespread, we will wonder 15 years from now, how did people operate their vehicles without this level of assistance? How could you be properly aware of your surroundings if you had to be so preoccupied with minor steering inputs?
If you use your phone and watch a movie or put on makeup while driving, you are breaking the law and endangering yourself and those around you.
Every time a new technology comes to cars, people say it will distract, hypnotize, or lull drivers into inattention (see wipers, radios, automatic transmissions, the original cruise control), and anyone who has actually driven with AutoPilot knows this feature is no different.
As a Tesla owner I enjoy and appreciate Tesla’s real-world approach to self-driving and it makes my life better and my drive safer. Thank you Tesla.
You are doing all of the driver attention work, but someone who activates Autopilot isn't required to. I think a lot of the argument against Tesla is that Autopilot isn't doing enough in that arena.
GM's Super Cruise is a lot more full-featured with regard to driver attentiveness. It might not be as useful in terms of being usable everywhere, but it's definitely more well-rounded in terms of enforcing driver attentiveness.
Tens of thousands of people a year fail to do exactly that. Tens of thousands of fatalities every year, not to mention maimings and injuries, and countless billions of dollars in property damage and lost wages, because humans generally fail in their legal and ethical responsibility to operate their motor vehicles responsibly and within conditions they can handle.
There have been a few cases where Tesla drivers have made the same failing. Sometimes the AutoPilot feature was engaged at the time. Sometimes they were just pushing the vehicle too hard and ended up on the wrong side of physics. The Tesla is stupidly fun to drive, and sometimes I drive more spiritedly than I should. Being too fun to drive could get people hurt or killed too (and it has).
I strongly believe a responsible driver is safer with an AutoPilot feature than without, mostly from my own personal experience, as the data point I used to cite is somewhat controversial.
Just like the amazing cornering and torque of the Tesla can be abused and even lead to fatalities when pushed too far, so can the AutoPilot.
I personally think it's a mistake to make the feature significantly less useful to a responsible driver to try to possibly prevent these edge cases. If it were possible to make the attentiveness features entirely non-intrusive, then absolutely they should be added. But in reality the attentiveness features are already intrusive to a responsible driver and detract from the experience.
Interestingly, Volvo seems to be going in a different direction. They believe that, as the manufacturer, it's their responsibility to create a product in which even a human attempting to operate it irresponsibly or illegally is kept safe. They're adding hard speed limits well below the functional limit of the hardware, and contemplating systems like breathalyzers and fatigue detection that would entirely disable the vehicle.
I personally don’t want to live in a world where every product I use is sizing me up and deciding how I should use it, whether it’s a chef’s knife, jet ski, automobile, or semi-automatic.
In the meantime what I love most about AutoPilot is how it can only possibly get better over time, and every single car in the fleet is benefiting from that. That’s as long as the regulators don’t fuck it up.
The attention problem is well known in engineering. It is very hard to get a human to concentrate on something that will turn out fine more than 99% of the time, even when there are serious or fatal consequences of failure. Trains are the classic example: tracks are almost always clear and signals are almost always correct, which means you have to devise all sorts of systems to keep the driver alert.
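Those train-style alerting systems are conceptually simple. Here's a hypothetical sketch of a vigilance ("dead man's") timer; the timings and escalation policy are invented for illustration, not taken from any real railway's spec:

```python
def vigilance_trip_time(ack_times, window=10.0, horizon=60.0, dt=1.0):
    """Simulate a vigilance timer: the driver must acknowledge at least
    every `window` seconds, or the system applies a penalty brake.
    Returns the time the brake triggers, or None if the driver kept up."""
    pending = sorted(ack_times)
    last_ack, t = 0.0, 0.0
    while t < horizon:
        while pending and pending[0] <= t:   # consume acknowledgements
            last_ack = pending.pop(0)
        if t - last_ack > window:
            return t                         # penalty brake triggers
        t += dt
    return None

# Driver stops acknowledging after t=28: the brake trips shortly after.
print(vigilance_trip_time([5, 12, 20, 28]))
# Driver acknowledges every 5 seconds for the whole run: never trips.
print(vigilance_trip_time([5 * i for i in range(13)]))
```

The forcing function is the point: the system never trusts sustained attention, it demands periodic proof of it - exactly what hands-on-wheel nags are a weak version of.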
Waymo's ultimate self-driving vehicle will not have a steering wheel (as seen in their early prototypes), as the vehicle takes the human out of the driving equation. Tesla will probably always have one for the foreseeable future (as they are tackling autonomous driving as a driver-assist system). That's the big difference to me.
It does feel a bit pedantic to say that, though. If Musk is sometimes delusionally optimistic rather than intentionally deceptive, how much does it matter to us? I guess the level of delusion does vary some - Tesla might meet vehicle production targets, SpaceX might meet their intended date for the first crewed mission, but a Tesla with current production hardware driving coast to coast while the driver reads a book sometime this year has no chance whatsoever of happening.
Also worth considering that the Silicon Valley VC universe does tend to reward people who could be described as being delusionally optimistic, and thus encourage them to continue to think in that way.
So there is clear wilful intent combined with actual deceit. Sounds pretty clear cut to me.
He promises the future before it has arrived, but unlike Theranos, he actually delivers a product. That product might be inferior to what was promised, but it is an improvement on the status quo nonetheless. And once released, Musk does not lie about its capabilities.
That statement certainly isn't true for any meaningful value of 'self-driving'.
> Summon: your parked car will come find you anywhere in a parking lot. Really.
Except that feature hasn’t shipped and there’s no ETA. I call that a lie. I appreciate what Tesla is doing to the industry but I wish they wouldn’t be so dishonest in their marketing.
It has a limit of 150 feet and I’m sure in practise it will just annoy other drivers unless the parking lot is fairly empty, but there you are.
Telling people that the actual physical car they are currently purchasing from his company will soon be capable of autonomous driving when, in fact, it isn't, and won't ever be, seems to be a pretty straightforward contradiction of that statement.
Also that's literally just your opinion that it won't soon be capable. It's definitely Musk's opinion that it will soon be capable. Mine too.
Given that the autopilot system has been evolving for five years (introduced in 2014 for Tesla) and still has problems like "aims at water barriers and accelerates", "fails to recognize that trucks may be more complex than simple cube in space", "fails to recognize that vehicles of a certain color may be hard to see when driving towards the sun"... and that will all be solved "this year" (t minus 9 months and change) seems...
supremely optimistic. I was going to call it naive, but that's an insult - Elon certainly knows far more about self driving than I do.
But this is the equivalent of mankind making fire, making the wheel, and then turning that into a car as the next logical step, ignoring everything in between.
But they've been claiming that the cars people already have will, in the future, be able to be autonomous.
Those statements are most accurately described as lying.
His sins may be less than those of the CEO of Enron, but that's a very low bar (after all Enron was the largest corporate bankruptcy ever when it collapsed), and people have gone to jail for much less. I get the hero worship, but it doesn't stop Musk from being a criminal.
Except for obstacles that coincide with whitelisted locations their sensors can't handle. It's only a matter of time before someone dies because of that hack.
That said, I don't think Tesla are wrong. If Tesla can deploy MobilEye-level automatic visual mapping (see https://youtu.be/GQ15HWCw_Ic?t=1381) this can obviate the need for LIDAR-based localization and dramatically improve their perception systems (by having very good prior information about all static obstacles, such as lane dividers).
Dynamic obstacle detection will still be worse than it would be with a vision+LIDAR approach, but not that much worse. Having superhuman ability along two axes (map-based priors and alertness / reaction time) is likely sufficient to drive better than humans, even if several remaining axes (dynamic obstacle detection and prediction) are worse.
Additionally, Tesla is not (AFAIK) using any structure-from-motion (https://www.youtube.com/watch?v=KT2KsN7yKo0) or stereo-vision (https://www.youtube.com/watch?v=SskSDjUG8ZY) techniques, but there's a chance that these could also improve perception, especially in tough cases where a single frame is not enough for good detection (e.g. white semi in sunlight).
LIDAR is not being used in the (compared to Tesla) small-scale AV systems of Waymo, Cruise, Aurora et al. because it's essential, but rather because it's convenient, and because the companies producing those systems want to give their AVs every advantage at any cost. I fully expect Tesla (and MobilEye) to do superhuman self driving without LIDAR, but it will take longer (i.e. not by EOY).
If Elon really thought "in theory, camera-only full self driving is possible, therefore I should invest resources in that approach" then he's dumber than I thought. He's done it because LIDAR was not an option - and the claim that FSD is coming soon was an intentional lie.
So personally, I would feel safe in a camera-only car driving during the day in good conditions, with good road markings, or on a controlled road where humans are excluded. I would not feel safe in any other situation.
I still want a Model 3 - it is so good in every other way - but I won't be trusting the self driving.
What's your evidence of intent?
Similar might happen to vision-only self-driving. It might take a decade or two longer to develop compared to LIDAR-based approaches, and meanwhile LIDAR is only going to get cheaper.
Sure, but only about 0.000014% of that time occurred while cars existed. It's not like we evolved to control big metal objects hurtling down artificially made flat surfaces.
A computer doesn't get distracted by a text message. A computer doesn't get drunk. A computer doesn't day dream.
Cases where a crash happens and the person at fault says something like "I didn't see them!", it's usually because they didn't even look, a mistake a computer won't make.
What we can measure is fatalities.
Human brains, which are general intelligences, exist, and are collections of atoms.
I'll happily, for $500,000/copy, upfront, promise to sell you artificial general intelligences. I swear I'll get them built in the next two years.
There is, after all, no first principle reason for why my promise isn't worth the paper it's printed on. Human brains are collections of atoms, so we should be able to, out of atoms, build artificial general intelligences.
Often when I'm stopped at a light, cars that are standing completely still will appear to be constantly moving forwards and backwards on the display. My best theory so far is that the Tesla's spatial model is getting thrown off by the other car's turn signal. This does not inspire confidence.
In general, I find the radar-enhanced cruise control very reliable (so nice in bad traffic), but autosteer is flaky at best.
Which is great but hardly unique to Tesla. Every major manufacturer out there offers adaptive cruise control. My car even recognizes the difference between in traffic stop-and-go/rush hour, and "queue" mode (exiting parking lots after events, etc).
I believe that stereopsis (multiple cameras using parallax to solve for per-pixel depth) is necessary to get a practical, well-functioning self driving system working. LIDAR is just too expensive and not good enough, but stereopsis is extremely flexible and can have extremely high angular resolution.
Combine naive stereopsis with temporally-coherent sensor fusion (e.g. a well-designed Kalman filter), and I think you could have very robust ranging. Humans are already very good at this with two narrowly spaced eyes (stereopsis to 1/4 mile range is not unreasonable for a person)-- but a car is not limited to a 1.5 inch stereo baseline; it could have stereo cameras on opposite sides of the windshield. That would hugely increase the depth sensitivity, even at moderate resolution-- parallax can be detected well below the Nyquist limit (since Nyquist cuts off frequency, but does not destroy phase).
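Back-of-envelope sketch of why the wide baseline matters, using the standard pinhole stereo model (Z = f·B/d). The focal length (1000 px) and sub-pixel disparity error (0.25 px) are illustrative assumptions, not Tesla camera specs:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, disparity_err_px=0.25):
    """First-order depth uncertainty: dZ ~= Z^2 * dd / (f * B).
    The 0.25 px sub-pixel disparity error is an assumed figure."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Human-like ~6.5 cm baseline vs. cameras at opposite edges of a
# windshield (~1.2 m apart), both ranging a target 100 m away:
human = depth_error(focal_px=1000, baseline_m=0.065, depth_m=100)  # ~38 m
wide  = depth_error(focal_px=1000, baseline_m=1.2,   depth_m=100)  # ~2 m
```

With these (assumed) numbers, widening the baseline from eye-distance to windshield-width shrinks the depth uncertainty by the same ~18x factor as the baseline ratio, since the error scales as 1/B.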
Tesla is totally failing at even the basic level of environment awareness (cf. cars which have been driving into exit dividers), which is what I consider to be the easy part of self-driving (the hard part is getting the machine to participate in a nonverbal social environment, which is what the roadway is). Rumor has it that Teslas can't detect obstacles far enough ahead to avoid them at more than 30mph-- absolutely abysmal, if true.
If it were me, I would put cameras on the Tesla windshield in this pattern:
Tesla has only three cameras with different FOVs and a very narrow baseline-- I doubt they are doing stereopsis. And I don't think they can do the job without it.
The article seems to extrapolate that Tesla has failed in autonomous driving because it removed some lines from its Autopilot page. It's still very much in progress and progress is routinely confirmed by the company. Musk is known for blowing timelines but they do get delivered.
When it talks about the "old approach" I'm again confused. No one else crowd-sources driving data from a real-world fleet. They have a unique unsupervised learning data source from shadowing drivers.
As for attentiveness: I don't see how drivers not paying attention is any different from mobile phone use. Currently, Autopilot is very clear, at every chance, that it's an assistant. Saying it will kill people is like blaming phones for drivers texting while driving and getting killed. These people die because they are breaking the law and not in control of their vehicle.
Yep, hence why this article is just clickbait for Tesla haters. Every Tesla owner is fully aware that it's not self-driving in the sense that you can take a nap.
That being said, two thoughts.
First, if they want me to buy it, then demo it in passive mode in my car. That is, there is space where the current speed limit is shown. Use that space, and below it, to show what signs and signals it has seen recently and their order of importance. Currently it does not see speed limit signs, and if that is wholly FSD territory then Tesla is overcharging compared to other systems.
Second, even at a standstill, cars around me jump on the display, and I am not sure it sees stationary cars when I am driving. The best example I have is a two-lane road through a subdivision I usually take; the outer lane is a long turning lane, but people tend to stop there to let kids out for the water park. I cannot recall my car showing a car when someone is loading/unloading kids, but it does see cars moving in that lane when I overtake them. So is it just going by too fast for the stopped car in the lane over to register? It knows it's a lane. I am not sure, but I will wait to see how it develops.
And I definitely think Computer Vision will get better before LiDAR gets cost competitive.
However, suppose the opposite: Suppose LiDAR gets cheap first. Like, I don't know, $1000 per unit and compact enough to not be a big burden on the rest of the vehicle.
Tesla charges $5000 for the FSD add-on. In principle, they could easily afford to retrofit vehicles with the cheap LiDAR plus maintain any advantages the full computer vision system would add (such as identifying vehicle types to assist in predictive behavior modeling and avoidance strategies).
So trying to go the full Computer Vision (with ancillary radar) route actually has pretty low opportunity cost for Tesla.
Another thing: I wonder how much simply much-improved geolocation could help? That failure from last year of driving right into a (failed) collision attenuator (where an off-ramp split off) could've been avoided simply by having high confidence, half-meter-or-better geolocation (from GPS, cell, IMU, etc fusion) and good mapping. And with good mapping/geolocation combined with other sensors, the sensors could focus on identifying changes to the expected road condition, perhaps increasing their robustness. (and when networked, they could use sensing from other vehicles ahead of the current vehicle to assist in navigation as well)
Ideally, you have a mapping system that is constantly updating (every time your fleet drives by something new like construction on the road it auto-updates the map).
Comma.ai is working on a vision+GPS Kalman filter solution along these lines up to 10 cm accuracy. I'd guess Tesla is as well.
And then yes, after that the vision would be primarily for localization.
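A minimal sketch of the fusion step such a vision+GPS filter performs: a single 1-D Kalman measurement update that blends a coarse GPS fix with a precise vision-to-map localization fix by inverse variance. The noise figures (3 m GPS, 0.1 m vision) are illustrative assumptions, not Comma.ai's or Tesla's actual numbers:

```python
def fuse(x, var, z, z_var):
    """One 1-D Kalman measurement update: blend the current position
    estimate (x, var) with a new measurement (z, z_var)."""
    gain = var / (var + z_var)        # trust the measurement in proportion
    x_new = x + gain * (z - x)        # to how much tighter it is
    var_new = (1.0 - gain) * var
    return x_new, var_new

# Start from a raw GPS fix (sigma ~3 m), then fold in a vision-vs-map
# localization measurement (sigma ~0.1 m). Numbers are assumptions.
x, var = fuse(0.0, 3.0 ** 2, 0.5, 0.1 ** 2)
# Posterior uncertainty is dominated by the tighter sensor: sqrt(var) ~= 0.1 m
```

The point of the design: the filter never has to pick one sensor over the other; the posterior automatically lands near the more precise measurement, which is how 10 cm-class accuracy can come out of meter-class GPS.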
And what does it mean to be "practically right" about the future? I don't understand how you could be right in practice about something that hasn't happened yet.
a) fold it out when you put the car in reverse, so it's always ice-free
b) ignore this problem. It's covered in ice for months.
This is a simple problem in comparison, and neither of the above solutions would work for a camera used for autonomous driving.
And if it doesn’t keep itself ice free it will risk going from uncovered to covered in seconds. Cameras need to be ice/snow/rain proof in the sense that they stay clear of it while driving.
That's probably the crucial point. For now, lidar is needed for safe operation, and too expensive for mass deployment in private cars.
Reading Judea Pearl's The Book Of Why certainly sobered my outlook on AI.
I’ve been driving for 40+ years, millions of km, 1/2 during winter.
Now, Elon is a Saskatchewan boy, so he should know better. Perhaps he's forgotten. Winter driving is at best 1/2 vision dependent, often much less. Many times, you have to actively ignore your vision (everything you see is "moving" sideways). Quite often, you're modulating your throttle to maintain a tiny ratio of +/- acceleration that maintains static friction; as soon as 1 or more contact patches achieve dynamic friction, you're entering a spin due to yaw forces induced by your other driven wheels.
Much of the time, the road surface adhesion characteristics are best detected by sheen, vibration, sound and guesswork/prediction based on temperature, sunlight, recent weather, etc.
Unless the vehicle has sound, vibration, and vision sensor integration — it cannot hold a candle to human driving, except in the most trivial driving scenarios.
Winter is a complete non-starter for any autonomous driving technology that I am aware of.
I don’t know if Lidar would really help much. It can’t punch through heavy rain/snow, at least not as well as radar anyway. Maybe it would help in trivial driving conditions. I think vision, sound, vibration, and radar with a powerful ANN trained by professional all-season winter drivers in challenging conditions might be useful, and eventually even good. It should be able to perform super-challenging evasion and recovery maneuvers that could save lives!
Why? The car can easily detect slipping. It can upload camera data every time any Tesla ever lost traction. You think machine learning won't be able to correlate those signals?
Certainly some problems are way easier for us, but that problem seems way easier for the Tesla fleet than for a human.
So often I see a giant line of cars behind the plow, slipping and sliding along, then I pick the deepest snow lane with the most fresh snow and it's just fine.
Generally it's not the lane you pick, but the safe speed for that lane. Sure that might vary per lane, but with today's sensors and electric motors the autonomous driver should be more accurate at quantifying that than a human. After all a computer could easily say apply 50HP to each wheel (one at a time) for 10ms or similar to quantify where the dynamic/static threshold is.
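A sketch of the torque-pulse probing idea described above, against a toy tire model. Every number here (torque step, slip threshold, where the toy tire breaks loose) is a hypothetical assumption for illustration; a real controller would read slip from wheel-speed vs. vehicle-speed sensors:

```python
def probe_traction(read_slip, max_torque=500.0, step=25.0, slip_limit=0.05):
    """Step per-wheel torque upward in short pulses until the measured
    slip ratio crosses slip_limit; return the last torque that still
    held static friction. All thresholds are illustrative assumptions."""
    torque = 0.0
    while torque + step <= max_torque:
        torque += step
        if read_slip(torque) > slip_limit:
            return torque - step       # last pulse that didn't break loose
    return max_torque                  # never slipped within the probe range

# Toy tire: grips up to 300 N*m of applied torque, then spins freely.
def toy_tire(torque_nm):
    return 0.0 if torque_nm <= 300.0 else 0.30

limit = probe_traction(toy_tire)   # finds 300.0 under these assumptions
```

This is exactly the kind of measurement an electric drivetrain can repeat every few seconds per wheel, which is why the parent's claim - that a computer could quantify the static/dynamic threshold more accurately than a human guessing from sheen and sound - seems plausible.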
If that's the case, why do cars have horns?
Or maybe they've gone down a blind alley in their rush to look like they were first to market.
That's what Zipline uses for their fully operational medical delivery system in Rwanda.
The drones are completely blind, directly, but they travel in pre-defined flight paths and communicate with one another and follow directions from flight control.
For self-driving cars, a similar situation may be extremely good GPS geolocation combined with network-wide sensor fusion and mapping. In principle, you could even do real-time optical or synthetic aperture radar from orbit or via persistent aircraft to allow the central controller to identify hazards and obstacles and to update maps in real time without much at all happening on-board except low-latency reactions. Most highways already have much of their length covered by cameras for the local Department of Transportation, and even many intersections and sidestreets have surveillance cameras. A low latency connection to that system, upgraded with higher fidelity, could significantly help autonomous systems. Might be another very helpful public good for cities to provide, much like GPS is provided.
...and on the engineering side, you can have vehicles designed specifically to reduce pedestrian injury, such as not having those large, boxy grilles on SUVs. Such things are almost entirely cosmetic and reduce efficiency, so Tesla doesn't have them on their cars. Going beyond that, external airbags may help as well. Zipline uses a foam body and a fail-safe parachute to stop the vehicle if there's a problem.
And BTW, there are 4 GNSS systems, each run by a different country/entity. The odds of them all failing at the same time are very small.
But even before that's complete: High precision GPS (i.e. with compensation for atmospheric effects) can get better than 10cm accuracy by tracking the carrier wave: https://en.wikipedia.org/wiki/Real-time_kinematic
Single-digit-centimeter-accuracy is possible near GPS signal compensating base stations (~20km). This level of precision is regularly used by surveyors.
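The arithmetic behind carrier-phase precision, for the curious. The L1 frequency is the published GPS value; the "~1% of a cycle" phase-tracking figure is a rough assumption to show the order of magnitude:

```python
C = 299_792_458.0        # speed of light, m/s
F_L1 = 1575.42e6         # GPS L1 carrier frequency, Hz

wavelength = C / F_L1    # ~0.19 m: the carrier cycle RTK receivers track
phase_fraction = 0.01    # ~1% of a cycle; an assumed tracking-noise figure
ranging_noise = wavelength * phase_fraction   # ~2 mm per satellite
```

Even with the ambiguity-resolution and atmospheric errors that push real-world RTK up to the centimeter level, a ~19 cm carrier tracked to a small fraction of a cycle is why this beats the ~300 m wavelength of the C/A code by orders of magnitude.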
There are several options for improving geolocation beyond inside-out (i.e. vision or lidar) tracking.