The inputs seem to be road line recognition, optical flow for the road, and solid object recognition, all vision-driven. Object recognition is limited. It doesn't recognize traffic cones as obstacles, either on the road centerline or on the road edge. Nor does it seem to be aware of guard rails or bridge railings just outside the road edge. It probably can't drive around an obstacle; we never see it do that in the video.
This looks like lane following plus smart cruise control plus GPS-based route guidance. That's nice, but it's not good enough that you can go to sleep while it's driving.
I suppose once statistics start to prove that these cars are safer than human-driven ones, we can chalk it up to an irrational fear, but for now it seems crazy to me to put my life in the hands of an AI, when a mistake means that I die or kill someone, rather than play the wrong song on Spotify.
I don't fully understand why more effort is not put into a hardware solution, where roads are simply marked up for self-driving vehicles, e.g. magnets lining the lanes or something like that. Of course a more expensive solution, but seems like it would make the vehicles themselves a whole lot simpler and safer. Begin with inner cities, where the area is limited and traffic is most complex.
Do you not drive using only your eyes? If it's not the sensors that terrify you, is it the software? Turing's central belief was that the human brain was 'just' a computer.
Regards doing things like embedding reflectors in roads and other ways to simplify lane holding, completely agree. But we can't forego the cameras etc that deal with situations that don't contain reflectors.
I suppose you could make ML avoid sharp changes in recognition, but I don't trust the current neural-network models to do so reliably.
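For what it's worth, here is a minimal sketch of what "avoiding sharp changes in recognition" could mean in practice, assuming the system exposes per-frame, per-class confidences (the class names, the smoothing factor, and all numbers are invented for illustration):

```python
# Exponentially smooth per-frame detection confidences so that a single
# noisy frame can't flip the recognized class. alpha controls how
# quickly the smoothed estimate tracks new evidence.
def smooth_confidences(frames, alpha=0.3):
    smoothed = None
    history = []
    for conf in frames:  # conf: dict of class -> confidence in [0, 1]
        if smoothed is None:
            smoothed = dict(conf)
        else:
            smoothed = {c: (1 - alpha) * smoothed[c] + alpha * conf[c]
                        for c in smoothed}
        history.append(dict(smoothed))
    return history

# A one-frame glitch (where "truck" spikes) is damped by the filter,
# so "car" still outranks "truck" on every frame.
frames = [{"car": 0.9, "truck": 0.1},
          {"car": 0.1, "truck": 0.9},   # glitch frame
          {"car": 0.9, "truck": 0.1}]
result = smooth_confidences(frames)
```

Of course, this only buys temporal stability; it does nothing about a network that is consistently wrong, which is the part I don't trust.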
This is the same thing.
The truth is people really, really suck at driving, but it's easy enough that the vast majority of mistakes don't result in any problems.
For a naive definition of "viable" that does not take into account other people who need to use the roadways. A car stopping, or even slowing drastically, in a lane will have the same negative effect on the transportation system as a minor accident.
Slowing for a turn then deciding to continue on, merging without checking blindspots, running red lights, speeding next to the short on-ramp lane, not following a yield to right turn sign, ignoring people in crosswalks, etc. All of these things happen all the time, I see at least one example every time I go out somewhere.
Now, we could get into hypothetical situations, but the reality is minor accidents are already really common like with so many other things machines can be better than people without being perfect.
I posit now: "as the problems with automated cars become evident, solutions provided will tend towards Skynet as t increases."
There are plenty of conditions that are more difficult for humans: cars painted the same color as the asphalt seem to disappear on highways. Also, fog.
Also, I'm more worried about temporary, unexpected configuration of things that suddenly make a NN see something weird.
Humans don't see elephants suddenly because a prior is telling them that an elephant is improbable in that context.
Something like that in a computer system shouldn't be beyond current capabilities.
 - http://www.evolvingai.org/fooling
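To make the "prior" idea above concrete, here's a toy sketch of combining a classifier's per-frame likelihoods with a context prior via Bayes' rule (every number here is invented for illustration):

```python
def posterior(likelihood, prior):
    """Combine per-class likelihoods with a context prior (Bayes' rule)
    and normalize so the result sums to 1."""
    unnorm = {c: likelihood[c] * prior[c] for c in likelihood}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# The raw classifier slightly favors "elephant" on a weird frame, but a
# highway-context prior makes "truck" the overwhelming posterior winner.
likelihood = {"truck": 0.4, "elephant": 0.6}
prior = {"truck": 0.999, "elephant": 0.001}
post = posterior(likelihood, prior)
```

That's roughly the mechanism by which a human "doesn't see elephants": the evidence is outvoted by the prior unless it's overwhelming.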
As a human, if you asked me to match these synthetic images to a most-likely real-world counterpart, then my responses would also look "confused."
Or when the graffiti becomes intentionally malicious (thinking of a variation of Wile E. Coyote's tricks with paint and solid walls)?
I don't understand where some people get the belief that automated vehicles will suddenly become targets of murderous or sociopathic pranksters?
There is precedent when it comes to the automation of people's jobs (truckers, cab drivers, courier services) and fanatical condemnation of technological advancement; see everything from the Luddites' infamous wooden shoes to the spiking of trees to stop logging.
In the forests near my hometown, some environmental fanatics will string up wire across motorcycle and ATV tracks, or create blockages and attack riders with crowbars and pipes. I wish I were kidding.
As for current graffiti artists - they're taking down stop signs and road markers, defacing handicapped parking signs with stickers, tagging any and all road signs, and so forth. They're minor nuisances for human drivers, who can usually identify and compensate for such changes easily. Computers, even those running neural networks, are not able to compensate so easily.
Yeah, instead you get hallucinations, muscle twitches, seizures, mental breaks, and many more ways a brain can fail to do its job.
A computer isn't going to look down at a text message on its phone, fall asleep at the wheel, or get in a car while intoxicated. All of which occur literally every day. People take driving for granted, because (at least in the US) we have to. Our sprawling suburbs couldn't function without it, so the barrier to entry is simply being a teenager.
Would I completely trust Tesla's auto-driving? No, not at all. But I at least trust them as much as I do an average driver, which is still just barely. The difference with Tesla's system is that one failure leads to improvement of the entire fleet, whereas one human failure maybe improves that one human driver. That means self-driving cars will continuously improve, while humans will just continue to be terrible drivers.
Whereas humans get the same kinds of fatal crashes over, and over, and over...
The Australian government has a "Black Spot"  program to provide federal funds (often millions) to fix up locations with a history of crashes. Bet it's cheaper to make those fixes to software.
For a human, the noise isn't bad recognition; it's sensor data corrupted by distraction, which is one of the most common causes of accidents. For the AI agent, an elephant will never appear unless it's been trained to be recognized. More likely it's recognized as a generic obstacle, which is OK. But keep in mind that years of Google's autonomous vehicle success make this pretty improbable, certainly less likely than with the average driver.
Much of the fear here is irrational and natural when humans lose control.
I was just answering the concerns about self-driving systems relying solely (or primarily) on visual data from cameras. It's true this is how humans drive, but the human visual system is much more complicated than what the current (published) state of the art in image processing seems to be. I do not trust visual-only systems today (especially if they employ deep learning shenanigans).
I know those problems aren't insurmountable, but I'd feel much safer if they threw in a LIDAR there too.
You'd be surprised. All kinds of conditions can do really weird things to your vision. Migraine auras probably being the most common.
If you take the example of Chess, humans and machines play very different styles. Computers replace predictive understanding with excellent memory and fast search.
Similarly for cars, we want to replace the lack of a powerful predictive model with some compensatory mechanism, such as sophisticated sensors. The problem is that, even with faster reaction times, better sensors and fuller awareness, these cars' compensatory abilities against a lack of powerful predictive models are still far from sufficient.
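To make the chess half of that analogy concrete, here is a toy "memory plus fast search" player: exhaustive search over single-pile Nim (take 1-3 objects; taking the last one wins), with memoization standing in for a chess engine's transposition table. The game is obviously trivial compared to chess; it just illustrates the mechanism:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win from this pile size.
    Brute-force search over all moves, with memoized positions; no
    'predictive understanding' anywhere, just memory and search."""
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)
```

The engine never "understands" that multiples of 4 are losing; it just searches its way there every time, which is exactly the compensatory trade-off described above.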
At this point in time, a car with more sensors and failsafes to augment the human against collision is safer than a self-driving car that depends on the driver never lapsing in attention; the latter depends on proper behavior from the customer. Even dividing tasks, with the human controlling the steering (but allowing the car to take over when highly certain of danger) and the car controlling acceleration, would be safer. That setup acknowledges the human propensity to mind-wander when attention is not engaged.
Until self-driving cars no longer require human rescue in difficult situations, one can't rightly say that self-driving software is a better driver than humans are.
Sure, it's worse in many other ways.
Another thing is that your eyes can last you 100+ years; technology is not that durable and breaks more often, the lenses on cameras get dirty, etc.
The cameras typically aren't as good as the human eye; for example, there are reports where a Tesla nearly drove into the truck in front of it only because the truck's back couldn't be differentiated from the sky by the camera.
Yes, relying on cameras only is perfectly fine when you're playing a PS4 game, but when life is on the line you want extra safety checks. Kind of like how cars have brakes on every wheel and then also a handbrake, or how every car has a minimum of 3 stop lights (how often do you see one or more dead lights on other cars?).
The belief that we have hard AI (a belief indirectly created by companies like Tesla, Google, Amazon, IBM, etc.) leads you to the conclusion that just two cameras should be enough. In reality we have not made much progress in that area; instead we have concentrated on specialized AI capable of solving specialized problems, and that kind of approach absolutely works better with more sensors.
This situation generated constant lane departure warnings for the driver.
How would "magnets" be any improvement over visual markings? Is there some sensor that can track magnets at greater distance, or with greater reliability than visual markings?
Besides, how many millions of miles of road would need to be equipped with these magnets around the world? What would that cost? Would it be acceptable that autonomous cars could only drive on roads that have been properly equipped?
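To put rough numbers on that cost question (every figure below is an assumption for illustration, not a quote from anywhere):

```python
# Back-of-envelope cost of embedding magnets along public roads.
# All inputs are illustrative guesses, not real estimates.
paved_road_miles = 4_000_000        # roughly the size of the US paved network
magnets_per_mile = 1_600            # one magnet every ~3.3 feet
cost_per_magnet_installed = 5.0     # dollars each, installed; a wild guess

total = paved_road_miles * magnets_per_mile * cost_per_magnet_installed
print(f"${total / 1e9:.0f} billion")  # → $32 billion
```

Even if each of those guesses is off by a factor of a few, the answer lands in "tens of billions for one country" territory, which frames the question of whether vehicle-side sensors are cheaper.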
It's worth remembering that self-driving AI does not solely rely on lane markings to navigate. It also can see the road edge, follow other vehicles, and cross-reference with a (learned/crowd-sourced) navigation model of the laneway. Additional sensor types, like LIDAR and radar, are often used in addition to cameras.
You can see a lot of this at work, in a complex London traffic environment, in this demo:
Volvo is plugging for magnets in roads in Sweden, so they can find lanes in the snow. It might happen, because it can also be used for snowplow guidance.
In heavy-snow areas, posts, poles, and even over-road arrows (Japan uses these in Hokkaido) are placed to provide guidance. They're not intended to replace vision and LIDAR, just to serve as additional guidance hints for bad conditions.
It wouldn't need to do anything but "ping", so the battery could last for years. You could even have a mechanism to disable it when the lines are repainted, though normally roads that use the existing version of these reflectors have them removed when the lines are moved.
The transmitter would have to be powerful enough to go through a small amount of snow in some states.
The question is can we get the car companies to agree on a standard for this? Would it make sense to put it in the frequency range allocated for vehicle radar systems?
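For a sense of how little such transmitters would need to do, here's the classic two-beacon lateral fix: pingers at both lane edges, and a car that can range to each. The lane width, ranges, and geometry below are invented for illustration:

```python
import math

def lateral_offset(d_left, d_right, lane_width):
    """Car's lateral distance from the left lane edge, given ranges to
    pingers embedded at the left and right edges at the same longitudinal
    position. From d_L^2 - d_R^2 = x^2 - (w - x)^2 = 2*w*x - w^2."""
    return (d_left**2 - d_right**2 + lane_width**2) / (2 * lane_width)

# A car 1.2 m from the left edge of a 3.6 m lane, with pingers 10 m ahead:
w, x_true, y = 3.6, 1.2, 10.0
d_l = math.hypot(x_true, y)
d_r = math.hypot(w - x_true, y)
print(round(lateral_offset(d_l, d_r, w), 3))  # → 1.2
```

The hard part isn't the math; it's exactly the standardization question raised above, since every car would need to agree on the ranging scheme.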
What we get instead is an attempt to define the whole system de jure, including full V2V protocols and specifications.
OK, RF would mean the lane geometry can be "seen" through all weather conditions (fog, snow). But so does precise-enough mapping/fleet learning. Neither would help in detecting traffic and transitory obstacles that might be hidden by that fog - you still need other sensors for that!
A "safe" autonomous vehicle should perhaps simply slow down in adverse weather conditions just as a human driver would. Even without additional sensors, its ability to see through snow and fog is certainly no worse than a human's.
Reading road signs optically is a solved problem. Mobileye and similar systems have been doing this for a decade or more.
Further, it would help with learning in limited visibility, although cars can already do that by remembering the locations of signs (people do that; do autonomous cars store city/state/worldwide road data?)
I'm not sure that's a fair analogy. Without the canal network to cheaply transport goods and resources in bulk, it's hard to imagine how Britain could ever have industrialised to the point where building a railway network became possible in the first place!
Besides, railways did not immediately obsolete the canals as you seem to suggest. Some canals were still being built and extended until the early 1900s (70+ years after the railway boom), and the last commercial freight services lasted until as late as 1970 - suggesting it was really motorways that "killed off" commercial canal traffic.
One key point of difference is that (like road freight) pretty much anyone could operate a freight service using canals, whereas the railways were in the hands of a few huge corporations. That meant a lot of niche, point-to-point freight was more viable to transport over water than rail.
They probably use GPS+map for a low resolution location and trajectory planning. Vision should only be used for fine positioning and obstacle detection.
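A hedged sketch of what that coarse/fine split could look like, as simple inverse-variance (Kalman-style) fusion of a rough GPS fix with a precise vision estimate of lateral position (the variances are invented for illustration):

```python
def fuse(gps_est, gps_var, vision_est, vision_var):
    """Inverse-variance fusion of two independent estimates of the same
    quantity. The more precise source dominates, and the fused variance
    is smaller than either input's."""
    w_gps = 1.0 / gps_var
    w_vis = 1.0 / vision_var
    est = (w_gps * gps_est + w_vis * vision_est) / (w_gps + w_vis)
    var = 1.0 / (w_gps + w_vis)
    return est, var

# GPS says 2.0 m off lane center with ~3 m sigma; vision says 0.3 m
# with ~0.1 m sigma. The fused estimate lands almost on the vision fix.
est, var = fuse(2.0, 3.0**2, 0.3, 0.1**2)
```

This is why the coarse source is fine for route planning but you'd never trust it alone for staying inside a 3.6 m lane.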
It should probably be both, to avoid moving goalposts. Otherwise even chess algorithms would still count as AI.
Self driving cars and image recognition are both definitely still AI.
"AI" does not typically mean strong or general AI.
This is Los Altos - Los Altos Hills - Palo Alto route to Tesla's HQ on Deer Creek Rd. A very expensive area to live and therefore well maintained.
That's a bit of overkill.
Regardless, there were two points in the video, around the 48 sec and 1:40 marks. The car stopped AFTER turning... I thought that was really strange behavior. Then when it passed the pedestrians, it almost came to a complete stop, instead of going slightly to the left and mildly slowing down.
Overall, it's a great case study. I think your criticism is technical snobbery instead of considering what this means for the average person. The future is here, get ready. When techies complain that someone is doing something with 'x tool' and not 'y tool'... I think those people are just throwing mud around and not actually contributing to the conversation. Build something better if you're going to nitpick over the technical details. It would serve you better, and all of humanity.
Here's Tesla's "Autopilot" killing someone. This is the Mobileye "does not recognize obstacles protruding into left edge of lane" bug.
Here's another crash from that defect.
Here's Tesla's "Autopilot" hitting a traffic barrier. Again, it's an obstacle at the left edge of a lane.
This is what happens when you take Level 2 lane-keeping and automatic cruise control and hype it into automatic driving. Other carmakers have offered comparable systems, but with much stronger driver-must-have-hands-on-wheel enforcement. Tesla didn't do that, encouraging drivers to relax and let the computers drive.
Waymo and Volvo take the position that when the automatic system is in control, the manufacturer is responsible. Tesla tries to blame the driver.
(Been there, done that. Ran a DARPA Grand Challenge team 2003-2005. Too old now to work on this.)
Also, what is the car doing at 1:33? It takes a right turn, then just stops in the lane, then proceeds.
This shows why the Tesla approach isn't good enough. The vehicle is on a freeway, a supported environment for the old "Autopilot". Everything is going just fine, until there's something unexpected the system can't handle, appearing too fast for the driver to take over in time.
Tesla is somewhere between Level 2 and Level 3. Good enough that the driver is tempted to tune out, not good enough that they can.
If you look across the roof of the Tesla, it totally failed to imitate the car in front of it too; that's a double fault, since it had more than one indication the lane was changing direction.
Here is an article with a bit more info about that crash:
There's more like these:
I would love to see some work toward configuring the HOV lanes that many interstates have (tolled or otherwise) to have not only clear visible markings but also in-road indicators, to facilitate easier autonomous driving in a more controlled environment.
It would have been so much safer for Tesla to have trademarked (not sure of the proper legal term) "Autopilot" for later use and called the current system "CoPilot" until it's safe on its own.
Yet here we are, driving forwards up hills.
Waymo wants a system that has a zero accident rate, even if that means it can't operate in the rain.
Perfecting safety with human drivers first is the way to go, and will remove most of the risk of letting computers drive.
Humans are still better at interpreting many things, and probably at dealing with weather, construction, and situations where there is no data.
That sounds speculative.
"Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year."
Volvo's CEO said as far back as 2015 that Volvo will accept full liability whenever one of its cars is in autonomous mode.
Why does this argument hold true only for self-driving cars, and not current cars? As such, self-driving cars are just a point in a continuous evolution. Current cars are already quite different from what they were a few decades ago. Why is now a good time to draw a line? Every change in this evolution goes through government approval. If it is approved, it should be safe enough for everyone.
When you cede control of the steering, braking, acceleration, and even watching the road to software, you are no longer driving.
If I buy a car and want to take the engine out, or a light, and resell it, I would be very stupid, but it is my property and I should be able to.
If Tesla wants to lend you something instead of selling it - fine, just be honest!
It's a lot like our allowing people to sell patent medicines as "supplements". In both cases, we let people distort the meaning of regular words for commercial gain.
Now, I'm not saying we should get rid of all laws. The end of copyright would not allow me to put a maliciously crafted copy of Microsoft Windows or Google Chrome online in order to deceive users into thinking they are genuine. I still may not misrepresent what I copy. In fact, I think these laws need strengthening, which we can do if we ditch copyright.
They're also restricting their customers' actions in meatspace (you can't drive people around in exchange for money) while the Office restrictions (you can't create business-related files) are IP-related.
A more accurate analogy would be if Microsoft banned you from using a Surface for business purposes (or for business purposes while running Windows, but it's impossible to install a different OS).
IANAL so I don't know if Tesla is on solid legal ground (I would guess that probably they are), but I think there's certainly a clear philosophical distinction.
Assuming Tesla eventually "catches" someone using a Tesla for ride sharing profit outside of their network they could remotely disable the car, at which point they will surely be sued and the courts will decide.
And they will very likely decide against Tesla, IMO. The immediate reaction to this statement is bound to cause people who work purely in the software realm to protest, but to most courts software is still a very nebulous thing. Disabling someone's car (after selling it to them outright) because they did something with it that you didn't like is EASY to understand, anathema to the entire history of car ownership and I really don't think it'll fly.
* It's all really just off-the-shelf cameras, sensors, processors etc sourced from various suppliers. Of course we should acknowledge that the validation and alignment of these parts plays an important role, but there's not really any secret sauce there.
What I am saying is that Apple sells you a product. You own it. You can do whatever you want with it. Feel free to alter any part of it; just don't expect it to keep working or to keep your warranty. The fact that the product has limitations out of the box is a different matter. (Kind of like how no one complains that a kitchen timer can't boil water, or doesn't have nanosecond precision; if you want those things, you buy a different thing or make your own.)
What Apple doesn't do is restrict how you use their product. There's no such thing as "you can only non-commercially photograph your family", "never resell your iPhone", or "detaching the screen is illegal".
And, more importantly, there is nothing Apple can do to stop you from doing any of those things.
You own your iPhone, but you rent your Tesla Autopilot, it seems.
Vehicle insurance like today can't be used because incentives aren't aligned. Fully self-driving cars will come with an insurance package, and the company will need to control its insurable risk, or there will be no self-driving cars at all. If it's against the law, the law will change, not the model.
I'm sorry, but society will demand it. You will not and cannot have full ownership control over a self-driving car.
- investing in such a fleet would be like investing in any other kind of business: it's easier if you have the money, but if not you can go to any bank and get a loan.
- prices for a trip with a self-driving taxi will drop immensely, which is probably going to be a big liability if you were to start the aforementioned business
- other effects that I can't think of right now
It is true that in a lot of places rich people have an edge on the less-wealthy (it's easier getting/staying rich if you are rich), but that's more of a generic problem than something that would be amplified by change to self-driving cars.
The "you don't actually own the hardware you paid for" model is slowly becoming untenable for hardware sold to individuals.
Autopilot is a service.
We just released the latest version of Autopilot. You can now experience Enhanced Autopilot features including Traffic-Aware Cruise Control, Autosteer, Auto Lane Change, Parallel + Perpendicular Autopark, and Summon. Automatic Emergency Braking, Forward + Side Collision Warning, and more advanced safety features are also active and standard.
All Tesla vehicles have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. And Tesla vehicles continue to improve with over-the-air software updates, introducing new features and improving existing functionality to make your vehicle safer and more capable over time.
To add to it, there are techniques like applying shadow to a bright image.
When we combine some of these techniques, the task does become possible, even in adverse conditions.
Also, various techniques often complement each other. For example, we may derive a steering angle from a machine learning system, and we can cross-check that against the lane detection, vehicle detection (no collision), etc.
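A toy version of that cross-check, with a made-up tolerance (real systems would obviously gate on far more than a single angle comparison):

```python
def plausible_steering(ml_angle_deg, lane_angle_deg, tolerance_deg=5.0):
    """Flag a learned steering command that disagrees with the angle
    implied by geometric lane detection. The tolerance is an invented
    number; a real system would tune it and add more checks."""
    return abs(ml_angle_deg - lane_angle_deg) <= tolerance_deg

# Agreement within tolerance: accept the ML command.
assert plausible_steering(2.0, 3.5)
# Large disagreement: fall back to a safe behavior and alert the driver.
assert not plausible_steering(20.0, 3.5)
```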
Come to think of it, when we drive ourselves, we just use our eyes. So there is a view that cameras alone are sufficient. But radar and lidar definitely complement the cameras and make the driving system more robust. As lidar gets cheaper, we are bound to see it used more, even in approaches that are camera-only today.
As I understand it, the newer ones use a machine-learning-based approach and different hardware than the old ones. Does this mean Tesla is actively working in parallel to develop two completely different self-driving software stacks, one built on ML and one around a more traditional heuristic approach?
Scary stuff, really. No consumer wants (or needs) autopilot "features" - at least not from a marketing standpoint. If you ask me, the car either drives itself or you are driving it.
It's a bit ridiculous to expect people to use these things safely. If anyone has to think about when the car is or isn't in control, I'm sorry, but you've already lost in my book. Humans just aren't that good at switching context or remembering what a product does (or what this particular version of a product does).
There are moments when you have to think about what state the car is in. But in my opinion that is by far offset by how much less tiring it is to drive a long way.
I don't know, that sounds like lots of human drivers I've encountered!
I also wish that the left-most and right-most lanes were special-cased, with a bit of preference toward the left and right side respectively, instead of sticking to the center of the lane no matter what. It just seems safer overall, and with a fairly large number of US roads not exceeding 2 lanes in one direction, it seems like a pretty good rule of thumb.
In the UK you use the outside lane if you are a slower vehicle and use the inside lane(s) as overtaking lanes, with the inside for the fastest speeds (within the limit). If you are using one of the inside lanes and are travelling slower than traffic in the outside lane, you can get stopped by the police and fined, and will certainly feel the ire of other drivers.
As you can imagine, anybody intending to use the left-most lane for high-speed driving is now forced to use the brakes, execute a [rarely safe] lane merge to the right, pass the slower vehicle on the right, and then aggressively execute a merge back to the left.
Roughly 100 people die in car accidents on a daily basis (https://en.m.wikipedia.org/wiki/List_of_motor_vehicle_deaths...). For further context, the World Trade Center attack on September 11, 2001 resulted in 3,000 deaths and led to a brief but complete shutdown of air traffic, a complete overhaul of homeland security, passage of the PATRIOT Act, at least one major war, and countless "military operations" that don't count as wars.
That's incredibly selfish, not least because it's incredibly dangerous for the driver that has to weave into the middle lane and back again.
Seems bizarre. Then again, you Yanks also don't have kettles, so nothing surprises me ;)
Mind blown. What on earth do they boil water with?
Depending on state there are laws about using left lanes only for passing, which implies you must be travelling faster than the lane to your right. Those laws are rarely enforced though.
Is it just the state I live in (WA) or is the rule not as solid as I thought?
The lane-holding feature (autosteer) does expect you to keep slight pressure on the wheel. It has, as far as I can discern, four modes ...
freeway following (another vehicle in front)
In the first case you have five minutes before it starts pestering you to hold the wheel; in the other cases it's one minute.
Lane holding and adaptive cruise control in freeway following mode is a WONDERFUL feature. It drastically reduces the driver's workload in heavy traffic. And, it has speed control features that contribute to the melting of compression-wave traffic jams.
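A toy illustration of the compression-wave point: a follower that low-pass filters the lead car's speed passes a much smaller oscillation downstream than one that copies every fluctuation. All gains and speeds below are invented:

```python
# Lead car's speed oscillates as a square wave (a stop-and-go wave).
lead = [20 + (8 if (t // 10) % 2 else -8) for t in range(100)]  # m/s

def follow(speeds, gain):
    """First-order lag toward the lead speed; gain=1.0 copies it exactly,
    small gains smooth it (crudely what a well-tuned ACC does)."""
    v, out = speeds[0], []
    for target in speeds:
        v += gain * (target - v)
        out.append(v)
    return out

twitchy = follow(lead, gain=1.0)   # human-like: copies every fluctuation
smooth = follow(lead, gain=0.05)   # ACC-like: damps the wave

def amplitude(xs):
    return max(xs) - min(xs)

# After settling, the smooth follower's oscillation is a fraction of the
# lead's, so the wave shrinks as it propagates back through traffic.
```

This is a cartoon, not a validated traffic model, but it captures why steady speed control can "melt" jams instead of amplifying them.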
Lane holding can be a little squirrely when the road surface is dirty. I think the car sometimes mistakes tire marks for lane markings. There are parts of my regular route where I just take over from lane holding.
Snow or heavy rain? The lane-holding feature goes first, then the adaptive cruise control. Snow, rain, or frost on the windshield interferes with the forward-looking camera. Snow or ice on the nose interferes with the radar. Ice on the bumpers interferes with the near-field sonar proximity sensors.
Autonomous driving on the Stanford campus and up Sand Hill Road is one thing. Autonomous driving in a nor'easter snowstorm on winding Atlantic Coast roads is another. We shall see.
I admire and support people who tackle hard problems fearlessly. Tesla's doing that.
It's like saying we shouldn't have released seatbelts because people might not have used them. Or maybe we shouldn't have released vaccines because you still need booster shots.
Safety features, even incremental ones, make the world safer for everyone.
The data already demonstrates that Autopilot reduces collisions and fatalities. To argue that it doesn't puts you in league with climate change deniers.
This needs empirical support. All of us are human, and are susceptible to false senses of security. Most of us are not good at assessing real-world risks and probabilities. If the autopilot software is good enough to drive the car successfully for months on end, drivers will start to trust it. It will be very hard to stay vigilant and aware, with eyes on the road at all times and an alertness level on par with manual driving.
In other words, an otherwise “good” driver can easily be lulled into complacency by a system that’s really good, but not perfect. It’s an uncanny valley that is worth discussing.
The fact that safety features such as seatbelts and airbags save lives is undeniable. They didn't lead to uncanny valleys of complacency for anyone but drivers who chose to abuse those features with reckless driving. The data for AP already indicates this trend will continue with ML drivers.
However, since the number of features is quite small, I'm sure owners will quickly get used to what the car does and doesn't do. The autopark and summon features are surely going to run over far fewer children than human drivers do in those situations. That leaves just adaptive cruise control and lane changing. Well, if a lane change goes wrong because you trusted the car, you still won't crash, since the front crash detection (and your eyes) should protect you from that. So I don't see any safety issue there.
Numerous times I've seen pilots complain that their employer requires autopilot to be used wherever possible, while they would like to fly the plane by themselves occasionally.
If we polished off the autopilot to also handle take-off and taxi, and installed it in every single plane that exists, I'm pretty sure the entire aviation industry could be completely automated.
Without consulting the literature, it just seems easy to believe that machine-like consistency 99.9% of the time is more important to safety than confusion in the 0.1%.
If the car fails to detect one in a thousand cars in front of you, or one in a thousand corners, I can see that being highly dangerous.
Let's say you change lane 5 times while commuting. That's 50 times per week. Now, are you going to pay attention well enough that you catch it twice a year just before it pulls out right into another car?
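Putting numbers on that, using the lane-change figures above and a hypothetical 1-in-1000 failure rate:

```python
# Expected exposure at a 1-in-1000 failure rate, using 50 lane
# changes per week (the figure from the comment above).
p_fail = 1 / 1000
changes_per_week = 50
weeks = 52
n = changes_per_week * weeks                  # 2600 lane changes a year

expected_failures_per_year = p_fail * n       # roughly 2.6
p_at_least_one = 1 - (1 - p_fail) ** n        # chance of >= 1 failure in a year
```

So at that rate you'd face a failure needing your intervention roughly 2-3 times a year, and the odds of getting through a year clean are under 10 percent. Whether a driver lulled by 2597 flawless lane changes catches those 2-3 is exactly the question.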
Perhaps likening it to having another person driving you would be good. If you were given a perfect chauffeur, except with the knowledge that they'll drive straight through one in a thousand red lights with no warning, are you confident you'll catch them?
Perfectly safe 99.9% and highly dangerous the other 0.1% may be overall more hazardous than human-level safe 100% of the time. Or it may not. I’m not making a claim either way. But it’s not a crazy notion that it may not be.
More data is needed. I’m cool with that data being collected in the real world with real drivers. Gotta crack some eggs...
LIDAR also does work great in the rain - provided you have multiple LIDAR units (e.g. Ford's snow-proof sensor array: https://qz.com/637509/driverless-cars-have-a-new-way-to-navi... ).
What I like about LIDAR is that it will never give you false-negative data regarding object proximity (i.e. it will never tell you an obstruction in the road is not there) but visual-only cameras can be fooled very easily and definitely can give you false-negatives regarding road obstructions.
It seems inherently less safe to rely on a more homogeneous sensor array: conversely it makes sense to use as many different types of sensor as possible to ensure your design isn't susceptible to being brought down thanks to a weakness in your predominant sensor type.
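A sketch of the conservative fusion rule that sensor heterogeneity buys you (the sensor names here are just placeholders):

```python
def obstruction_present(sensor_reports):
    """Conservative OR-fusion across heterogeneous sensors: declare an
    obstruction if ANY sensor reports one, so a false negative requires
    every independent sensor type to fail at the same time."""
    return any(sensor_reports.values())

# Camera fooled by a bright sky, but radar and LIDAR still see the truck:
assert obstruction_present({"camera": False, "radar": True, "lidar": True})
# Only a simultaneous failure of all three modalities misses it:
assert not obstruction_present({"camera": False, "radar": False, "lidar": False})
```

The trade-off, of course, is false positives (phantom braking), which is why real systems weight and cross-validate rather than taking a raw OR; but the asymmetry in failure modes is the point.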
That said, I think it's disingenuous of Tesla to call their system "autopilot" or imply autonomy of any kind when talking about their system. I will call something autopilot when it can drive me from door to door without me touching the wheel, in less than friendly weather conditions. Not drive in a straight line where it never rains.
It's true that a careful, experienced driver will typically recognise a rapidly emerging hazard as much as several tenths of a second faster than a novice, giving them significantly more time and space to react. However, a careful, experienced driver will also anticipate places where there are likely to be hazards and adjust their driving style to compensate.
Does a self-driving car know that there's a park just round the corner and it's half an hour after the local kids came out of school, thus increasing the risk of a child chasing a ball into the road?
Does a self-driving car understand that the group of people standing quite near the road up ahead are outside a bar at 11:30pm and thus quite likely to be drunk and suddenly stagger into the road?
Does a self-driving car know about the pothole in the cycle lane that you had to avoid while riding into town yesterday, and anticipate that anyone riding in that cycle lane today may move out into the main traffic lane without warning to go around it?
Does a self-driving car know that the news last night reported on a local black spot for "accidents" caused by people wanting to make fake insurance claims, and decide to take another route that is a little slower but avoids that black spot?
Better sensors, fast data processing, and the ability to monitor all sensors all the time are big advantages, for sure, but these things mostly support reactive behaviour. I've seen nothing so far to suggest that better reactions currently outweigh proactively avoiding or mitigating these kinds of hazards in the first place. Obviously that might change in any number of ways in the future, but we seem to be a long, long way from that point yet.
On your second point, the important thing here is that you don't need to distribute much of this information to humans. Humans automatically recognise situations based on all of their experience, not just their driving experience. Of course sometimes external information sources like the news might be helpful, but much of it is just down to understanding context. See fresh horse crap on a country road? Someone's probably riding horses nearby. Horses scare easily. So, slow down and try to avoid anything noisy that could startle the animals. How many of today's self-driving algorithms take into account this kind of implied knowledge?
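One way to frame this kind of implied knowledge is as a prior that scales hazard probability before any sensor fires, and then conditions speed on it. The cue names, multipliers, and threshold below are all invented for illustration; this is a sketch of the idea, not a claim about how any shipping system works.

```python
# Hypothetical context prior: cues raise the estimated probability of a
# sudden road incursion, and the planner slows down proactively.
BASE_HAZARD = 0.01  # assumed baseline probability of an incursion

CONTEXT_MULTIPLIERS = {
    "school_nearby_at_dismissal": 5.0,   # kids chasing balls
    "bar_crowd_late_night": 4.0,         # drunk pedestrians
    "fresh_horse_droppings": 3.0,        # horses nearby, easily startled
}

def hazard_estimate(active_cues):
    """Scale the baseline hazard by each recognised context cue."""
    p = BASE_HAZARD
    for cue in active_cues:
        p *= CONTEXT_MULTIPLIERS.get(cue, 1.0)
    return min(p, 1.0)

def target_speed(limit_kmh, hazard_p, threshold=0.03):
    """Slow to 60% of the limit when contextual risk crosses the threshold."""
    return limit_kmh * 0.6 if hazard_p > threshold else limit_kmh
```

The hard part, of course, is not this arithmetic but recognising the cues at all, which is exactly the experience-based inference the comment is pointing at.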
If you seriously want to play the inane game of "well if a human doesn't have it then a Tesla doesn't need it" then let's play that game and talk about the things humans have that the Tesla lacks.
What's interesting about the human vision system is that the human eyeball is, relatively speaking, poor. We already have digital cameras far better than that. It is what the human brain does with that raw data that makes us, as a species, thrive. Most of what we believe we "see" we never actually see; our brain fills in the gaps dynamically and infers information over time.
So this human processing ability, much of it automated rather than conscious, is totally relevant if you want to have this "Tesla Vs. human" debate. It is also why Lidar might be needed to make up the massive shortfall in a Tesla's processing ability relative to the human brain.
But hey, you want to keep to the "but HUMANS don't need it" then I ask where is my 38 petaflops and 1 TB of memory...
Teslas have multiple radars for judging distance and multiple cameras that are used for stereo disparity. Also, the human brain's 38 petaflops are not the same thing as Nvidia petaflops.
I am not saying Teslas are better than humans; I'm just saying a Tesla can drive the I-5 from Vancouver to Mexico better than I can.
Also, LIDARs are really, really expensive, so I applaud Tesla and comma.ai for breaking major ground with cameras alone. Convolutional neural nets have been doing phenomenal things in the past few years.
You can find other figures, but many are in the petaflop range, well above what could be realistically installed in a vehicle.
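As back-of-the-envelope arithmetic, the gap between a petaflop-range brain estimate and in-vehicle compute is a couple of orders of magnitude. Both figures below are assumptions (the 38 PFLOPS estimate from the comment above, and an assumed order of magnitude for a high-end automotive compute module), not measurements.

```python
# Rough comparison only: both numbers are assumed estimates, not measurements.
brain_flops = 38e15   # ~38 PFLOPS, the brain estimate cited in this thread
car_flops = 2.5e14    # assumed order of magnitude for in-vehicle compute

print(f"gap: ~{brain_flops / car_flops:.0f}x")  # prints: gap: ~152x
```

Even granting large error bars on either figure, the ratio stays well above 1, which is the point being made.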
I don't think we know enough about how the human brain works yet to give a precise value, but just on caloric arguments I would say that the mean processing power of the brain is not significantly above what we have now in general purpose computing devices.
An in-depth report on Tesla's code and system qualification framework would be a very interesting read, I'm sure.
In the Model S configuration page, "enhanced autopilot" and "full self-driving capability" are still optional extras that together cost $8,000, so I'm not sure what they're talking about here.
Apparently, these features can also be unlocked at a future time, for a higher fee of $10,000.
It's also equivalent to those high-end 48-port network switches with soft-locks that disable ports until you fork out $lots to unlock them: it's an artificial limitation.