What he did is impressive. But the results are not that outlandish for a talented person.
1) Hook up a computer to the CAN-Bus network of the car and attach a bunch of sensor peripherals.
2) Drive around for some time and record everything to disk.
3) Implement some of the recent ideas from deep reinforcement learning [2,3]. For training, feed the system the observations from the test drives and reward actions that mimic the reactions of actual drivers.
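To make step 3 concrete, here is roughly what "reward actions that mimic the driver" reduces to in its simplest form: behavioral cloning, i.e. regressing the logged human action from the logged observation. A toy numpy sketch -- the linear policy, feature count, and synthetic data are all made up for illustration; the real thing would be a deep net over camera frames:

```python
import numpy as np

# Toy behavioral cloning: fit a policy that imitates logged human
# steering. Shapes, features, and the linear model are hypothetical.
rng = np.random.default_rng(0)

n_samples, n_features = 10_000, 32            # logged frames, sensor features
X = rng.normal(size=(n_samples, n_features))  # observations from test drives
true_w = rng.normal(size=n_features)          # stands in for "what humans do"
y = X @ true_w + 0.1 * rng.normal(size=n_samples)  # human steering angles

w = np.zeros(n_features)  # linear policy weights
lr = 0.1
for epoch in range(200):
    pred = X @ w                          # policy's steering output
    grad = X.T @ (pred - y) / n_samples   # gradient of the MSE "mimic" loss
    w -= lr * grad

print("imitation MSE:", np.mean((X @ w - y) ** 2))
```

The catch, as discussed below, is that a policy trained this way only knows the states the human driver actually visited.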
In 2K lines of code he probably does not have a car model that can be used for path planning (with tire slippage, etc.), so his system will make errors in emergency situations, especially since the neural net has never experienced most emergencies and could not have learned the appropriate reactions.
And guess what, emergency situations are the hard part. Driving on a freeway with visible lane markings is easy. German research projects have driven autonomously on the Autobahn since the 1980s, and neural networks have been used for the task since about the same time.
The parent's checklist misses a bunch of things. For instance, "1) Hook up a computer to the CAN-Bus network of the car". That alone is not trivial. It is trivial if you want to read the car's odometer, but good luck doing more than that. For instance, people are still trying to make sense of the reported battery cell voltages in the Nissan Leaf. All the interesting features are undocumented and require serious reverse engineering. "Hooking up to the CAN-Bus" can easily become a task for a whole month, full-time.

Not to mention that the most useful features for the self-driving part are probably not accessible over the CAN-Bus: people are still trying to unlock the doors of the aforementioned Nissan Leaf, and steering, acceleration, and braking are unlikely to be on it. "2) Attach a bunch of peripherals" is also hand-wavy, and the same goes for the rest of the post.
It would be like dismissing SpaceX's accomplishments by saying: "1) Build rocket frame. 2) Build engine. 3) Program flight software. 4) Fill up the tanks with fuel. 5) Push a big red button." The devil is in the details.
With that out of the way: if the events happened as described, this guy should be convicted of reckless "driving". Taking a prototype that had only started working a few hours prior out for an actual test run on a freeway with other cars is insane. What about some simpler, more useful, and less dangerous goal? Such as a lane-departure warning add-on for cars which lack that capability?
The article title is the worst part though. It's not "clever dude created a self-driving car prototype by himself". It is "Dude is taking on Tesla by himself". Which is bullshit.
EDIT: Fix typo.
We all take from the good work of those around us. But how many people seriously do things with that work? Not many, and disdaining people who do so is not productive or, in my view, a good thing.
We've also seen all the news from Google about their efforts and the pain points that they are experiencing. And this guy cobbles some stuff together and just puts it on the road. Most of us are not as smart as this guy, but that's just irresponsible. That just puts a bad taste in people's mouths.
It's not like it's unsupervised. Is it any more dangerous than taking a learner human driver out in a car?
Where's the sudden breakthrough? All of this is built on technology and work that came before it. The whole field. It probably only really started being worked on in earnest from a business context because big tech companies like Google had more money than they knew what to do with, and were willing to spend it on ventures with no likelihood of profit any time soon.
We have to draw the line somewhere.
I'm painfully aware of this. Ten years ago I ran one of the 2005 DARPA Grand Challenge teams. That's about what we produced with less than three full-time-equivalent people. We didn't have to handle other vehicles, but we did have to handle off-road conditions. Ours didn't make many mistakes, but it was very conservative and kept stopping to rescan its environment with a line-scanning LIDAR on a tilt head.
I'm scared of happy-case automatic driving implementations. Tesla went down that road and had to back up, removing some features. Cruise's PR indicates they were going that way, but they now realize that won't work.
> Not that outlandish for a talented person
What planet are you living on? I don't know what you did today, but I played with some jquery animations. This guy drove around in a self driving car that he built himself. It doesn't solve for edge cases? Neither do 90% of CRUD apps. Holy shit.
Give some credit where credit is due. This is not an ordinary or average outcome.
My point was that what 99% of HackerNews does is likely nowhere near as interesting or as difficult, so when the top comments are all shitting on someone who did something that's actually pretty amazing, HackerNews can go to hell. I mean that from the bottom of my heart. I'm done here.
Never mind the ridiculous amount of engineering that was required to build all the tools he's using, and the order of magnitude more engineering required to make this a safe, mass-producible product.
But nah, let's just praise the founder, allow him to get rich while we all do the dirty work.
Claims of commercial viability or beating Tesla are a bit ridiculous, but this is pretty damn amazing.
> “I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”
> “ ‘If’ statements kill.”
> “I want power. Not power over people, but power over nature and the destiny of technology. I just want to know how it all works.”
1. Build a demo
2. Impress investors
3. Hire others to finish the job
Tesla customers invest in Musk. Musk invests in Hotz. Hotz invests in developers. Developers invest in researchers. We're all delegating until we find someone who can finish the job. We're investing in people to hire the right people.
It doesn't work this way. You just move up the value chain.
Your comment screams a superiority complex, but I bet that you are actually a nice person in real-life. Hotz is doing good work, and everyone in the technical field is relying on work done decades before they were born.
You mean like this one, from Defcon in 2013?
Or The Defcon 19 How To CanBus Hack Workshop, that taught classes about this in 2011?
Unfortunately for him, Defcon prefers original content, not someone claiming credit for what has been demonstrated repeatedly in previous years.
It's an impressive personal project, no doubt about that. However, it's also important to recognize the difficulty of having a system that works in mass production and handles all kinds of situations. As someone said earlier, it's easy to have the car drive on a clear day with very visible markers. The hard part is when it rains, when it's foggy, when things are less optimal, etc.
Once he gets to that point, he'll find that part to be a lot harder than what he's accomplished so far.
>> "If they had engineered a self-driving car, they would've engineered a self driving car".
You are just being absurdly accusative in your comment. That guy has built a self-driving car with nothing but tools available on the market.
Yeah, instead of going after the sensationalism, how about you discourage him from endangering the lives of others who weren't given the opportunity to make such a stupid decision?
This article is a hero worship piece about a guy rather than a story about the technology. It's like how you can't find an article about Theranos that isn't actually just a photo-shoot/celebrity worship article about its founder.
In the video, he claims to want to achieve level 3 driving. Let's see how he can do the following under non-perfect conditions:
- switching lanes
- stopping at lights
- turning corners
- turning left in traffic
Then we can move on to the more difficult situations.
Have all stoplights broadcast a beacon that tells cars their state: "the light is green north-south, red east-west."
Have exit signs broadcast beacons as well.
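For the sake of argument, the broadcast could be as small as this. A hypothetical message format, sketched in Python, with an HMAC tag so that the tampering worry raised below is at least not a free-for-all (the key provisioning is hand-waved; none of this is a real V2I standard):

```python
import hmac, hashlib, json, time

SHARED_KEY = b"demo-key-not-for-production"  # hypothetical provisioning

def make_beacon(intersection_id: str, ns_state: str, ew_state: str) -> bytes:
    """Pack a traffic-light state broadcast: 'green north-south, red east-west'."""
    msg = {
        "id": intersection_id,
        "ts": int(time.time()),   # timestamp, so old messages can't be replayed
        "ns": ns_state,           # "green" | "yellow" | "red"
        "ew": ew_state,
    }
    payload = json.dumps(msg, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps({"msg": msg, "sig": sig}).encode()

def verify_beacon(raw: bytes, max_age_s: int = 5) -> dict | None:
    """Reject tampered or stale broadcasts."""
    wrapper = json.loads(raw)
    payload = json.dumps(wrapper["msg"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, wrapper["sig"]):
        return None
    if time.time() - wrapper["msg"]["ts"] > max_age_s:
        return None
    return wrapper["msg"]

print(verify_beacon(make_beacon("5th-and-main", "green", "red")))
```

Of course, a shared symmetric key is exactly the kind of thing the OTA-hacking objection below would go after; real deployments would need per-intersection keys or signatures.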
Have you seen a traffic light? They're pretty substantial. How long would it take for you to make one in a hackerspace?
Contrast this with hacking OTA updates for traffic beacons. You might not even have to change any atoms around to do your dirty deed. You might not even have to be there physically.
The real issue here is that self-driving cars are probably the wrong place for that to happen in AI. At best, a solo project creates a crappy prototype where there was no product before (again, see Woz/Apple). The expectation for driverless cars is too high: they need to be 100% good, not 80% good, because your life is on the line.
What's the AI project that would blow people away, even if it was a shadow of a working prototype? I think that's the real question.
Even if AI cars are statistically better than humans on average, it's an issue of control. It's true that most accidents are avoidable and caused by human error, but most people are (perhaps overly) confident in their own ability to drive safely (this is also why people text and drive).
Robot that can build a better robot?
Finding a safe place to test an autonomous vehicle on a budget is hard, but not impossible. Our initial testing in 2004 was in a large unused Sun parking lot in Fremont. (Sun got carried away with expansion plans, and started building a big facility there. They paved the parking lots and poured the building foundations, then stopped construction.) Later off-road testing was at the Woodside Horse Park. We also looked into testing at the Hollister off-road vehicle park, and discovered we could book a sizable area on a weekday for our exclusive use. We never used that, though. We'd also looked into using the old FMC tank test track in San Jose, but never found a good contact there.
In other words, I don't care if this guy is painted as a genius or a script kiddie. He's not relevant in my life, and I will forget about him a week later. However, the lessons about machine learning and engineering that I can find in this article are the reason I subscribe to HN (yes, I don't really know shit about these topics, and don't have enough time to fill gaps with real sources), and this comment is the most informative, just because he tries to cover what the article didn't.
This is just the start, not the end.
Human drivers might only see one of these cases a month, or every six months, but not driving over someone in that case is what is critical. I'm not saying it's an impossible task, but IMO it will require a lot more training data than is humanly possible for one person to generate.
Even pseudo-faking like we were trying to do, wherein a generated signal is injected into actual, recorded background noise, is fraught with problems. Anybody who tries to develop a control system based solely on such data is in for a rude awakening when they try it for real for the first time.
History keeps repeating
General cynicism isn't really adding much to the conversation in my opinion, since almost everyone here probably knows this already. And too much cynicism can put people off starting projects, and people starting projects is something we should cherish.
I do think your point about emergency situations is substantive though. Perhaps he is only planning for self-driving while supervised by humans, but his idea for training as described (become an uber driver) would not at all produce the kind of dataset that would assure me that I would be safe. I think a lot of training with advanced drivers in simulators where you can have crazy life threatening situations would be the absolute minimum. I'd be worried that bad habits picked up on the thousands of uber rides would kick in during an emergency rather than the couple of situations that would be feasible to train on in real life.
What's better about an AI powered by neural nets is that you could train an AI to go offroading.
Get enough data and you've got a model for dealing with a given situation. Google's biggest strides with OCR, Voice recognition, Spam filters and other AI tech early on came from its ability to gather a huge corpus of data.
The real challenge is twofold: gathering data, and feeding the AI the inputs that actually matter. This is the secret sauce that Hotz refers to in the article as the information he's not willing to disclose. That information will become commoditized in due time (like low-latency optimization for HFT), but it will take plenty of institutional money & experience (Google, Apple, Tesla, Ford, etc.) to get it there.
It's fairly easy to train and verify a system for driving in well-behaved traffic. Unfortunately, the problem space of not-well-behaved traffic is far wider, and it is very hard to gather enough data to train a system well.
What you're going to get is self-driving cars which handle 99% of driving just fine - and when they end up in emergencies, find the human 'driver' to have dozed off at the wheel. (All in all, their safety record might end up better than the status quo - but that's not a certainty.)
The reactions of a neural net to unusual stimuli are likely to be counter-intuitive at best and unpredictable at worst.
Machines, on the other hand, are a different story.
It seems that the technology has already reached, or is very close to reaching, human levels of proficiency on the road. If specific use cases (offroad, snowpacked roads, etc.) are problematic, they can be limited or prohibited in the meantime.
I have no inside knowledge, but I would be very surprised if Google's self-driving cars use neural nets as a significant component right now (which isn't to say there aren't people exploring its use).
I could probably do this using ROS, OpenCV, and PCL -- at least to a level where the car could recognize the road well enough to drive on it. But I imagine both my car and his car are nothing that any sane human would want to sit in. That last 20% focusing on safety and edge cases is going to be 100x the work/innovation/testing/staff/code/talent/smarts here.
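For the curious, the "recognize the road well enough" part really is startlingly little code. A minimal OpenCV sketch of the classic happy-path lane detector (Canny edges plus a probabilistic Hough transform; the crop region and thresholds are guesses you would tune per camera):

```python
import cv2
import numpy as np

def detect_lane_lines(frame: np.ndarray) -> np.ndarray:
    """Return candidate lane segments as (x1, y1, x2, y2) rows.

    The classic happy-path pipeline: grayscale -> blur -> Canny edges
    -> keep a trapezoid where the road usually is -> Hough lines.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Mask everything but the lower-middle region where lanes live.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                     (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4))

# e.g. lines = detect_lane_lines(cv2.imread("dashcam_frame.jpg"))
```

This falls apart exactly where the thread says it does: rain, fog, glare, and missing markings.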
As a side note, I am intrigued by the idea of a FOSS self-driving car. It's a little worrisome that we'll never see the code Tesla, Mercedes, etc. are using.
So yes, it's an amazing project for a single person, but it's not really a self-driving car.
Could you explain on what basis you claim this? Do you have intimate knowledge of his prototype, the amount of work he put in, or the novel ideas that he brought to the project in addition to integrating pre-existing tech?
From what I can understand, your argument seems to be "let's see if I can guess what he did". If you're an authority in this field, then your guess could be very accurate, I suppose.
OK, maybe it's BS, but he's not saying what you say he's saying.
I would be very surprised if you got deep reinforcement learning to perform well on a self-driving task, even on a highway. If you did, well, your faculty position at Stanford is waiting for you.
more states? add neurons! more search space? add layers!
except they have unpredictable resonances, especially multi-layer networks.
they're just starting to understand this, but I believe the myth of the 'do it all dnn' is gonna die. it's time to start thinking about clusters of independent neural networks, each supervising an independent aspect of the search space and/or each other.
Small personal example: my family lives out in the suburbs. My dad works in a neighboring city. His commute is about a half hour, 20 minutes of which is a straight shot on a major highway. I'm sure he'd be willing to pay a few hundred to reclaim that 20 minutes each way to read a book/the newspaper, check his email, browse the internet, etc.
It'd also be good for road trips.
Groves' "Principles of GNSS, Inertial, and Multisensor Navigation Systems" contains a good description of the various technologies used and their accuracy limits for navigation.
Consumer-grade MEMS are fine for airbags or the pedometer in your phone, but are not sufficient for inertial navigation, even when aided by other sources. At around $2K-30K you get systems that can provide accurate navigation for up to 2 minutes or so. They are used in things like missiles.
Aviation-grade IMUs need to meet the SNU 84 standard, which requires a maximum horizontal position drift of 1.5 km in the first hour of operation. These will run $100K and up. Marine-grade units (subs, rockets, ICBMs) run $1 million and up, and have a maximum drift of 1.8 km per day.
None of them are good enough for autonomous cars w/o sensor fusion.
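To put numbers on why, here is the back-of-the-envelope arithmetic. This is a deliberately simplified model -- a constant accelerometer bias, double-integrated, ignoring gyro errors, Schuler oscillation, and aiding, so it overstates the drift of the better grades -- and the bias figures are my rough, grade-typical assumptions, not the SNU 84 numbers above:

```python
# Back-of-the-envelope: horizontal drift from a constant accelerometer
# bias alone. Real error budgets have many more terms (see Groves).
G = 9.81  # m/s^2

def drift_m(bias_g: float, seconds: float) -> float:
    """Position error from double-integrating a constant bias: 0.5*b*t^2."""
    return 0.5 * (bias_g * G) * seconds ** 2

for grade, bias_g in [("consumer MEMS", 1e-3),   # ~1 mg bias (assumed)
                      ("tactical",      1e-4),   # ~100 ug (assumed)
                      ("navigation",    1e-5)]:  # ~10 ug (assumed)
    print(f"{grade:>14}: {drift_m(bias_g, 60):>8.1f} m after 1 min, "
          f"{drift_m(bias_g, 3600) / 1000:>8.1f} km after 1 h")
```

Even under this crude model, consumer MEMS drifts tens of meters within a minute, which is why cars must fuse the IMU with GNSS, odometry, and vision.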
I am totally for a computer telling me when I'm driving dangerously because I'm distracted or tired.
Take the case of the Wright Brothers, who faced two well-funded adversaries. Samuel Pierpont Langley had a chair at Harvard, worked at the Smithsonian, and had, among other backing, $50K from the US War Department. Alexander Graham Bell, the inventor of the telephone, was an avid aviation enthusiast and an already wealthy man. One of Bell's assistants was Glenn Curtiss, who went on to found his own plane company.
Who would bet on two bicycle mechanics from Dayton, Ohio? No one, yet they were the first to fly.
The first popular microcomputer would surely come from IBM or HP yet it didn't. Two guys in a Cupertino garage built it and neither of them was a college graduate.
This guy may fail but I am not going to bet against him. In fact I hope they televise the race between the Comma and the Tesla. I'll bring the popcorn.
I.e. they did much more than simply throw some ideas and parts together and see what stuck, like every other contemporary experimenter.
Ever since I played around with Prolog in the nineties, I have believed that, just as digital eventually triumphed over analog, neural networks will eventually triumph over rule-based software. I did not know when it would become apparent, but I firmly believe that it is coming.
Learning for the AI does not have to come from real-world experiences only. Simulated/controlled emergency situations would help as well! Further, even if the 2K lines of code stretch a bit more to deal with unknown situations, that isn't so bad either.
HERE has been dedicated to making maps at the quality level that is needed for autonomous cars. A few issues with the currently available data: it isn't very detailed, and you're at the mercy of volunteers (TIGER (old), OpenStreetMap) or of a company whose main focus isn't maps (Google).
What if you could simulate these conditions in a safe/controlled environment, and remove the driver from harm via remote control? Maybe build a virtual world that simulates the inputs as best as possible. That would be the cheap way, although you may lose fidelity.
If you had enough money you could build a simulated town/city, similar to a movie set, that throws all possible dangerous scenarios at you and operate the car remotely through these scenarios.
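In skeleton form, the cheap version might look like this: a hypothetical scenario harness where the controller under test never touches a real road. The scenario list, the point-mass physics, and `toy_controller` are all placeholders you would swap for the real model and a real simulator:

```python
# Hypothetical hazard scenarios to replay against a driving policy.
SCENARIOS = [
    {"name": "child_runs_out", "obstacle_dist_m": 8.0,  "speed_mps": 13.0},
    {"name": "stalled_car",    "obstacle_dist_m": 80.0, "speed_mps": 30.0},
    {"name": "debris_in_lane", "obstacle_dist_m": 25.0, "speed_mps": 25.0},
]

def toy_controller(obs: dict) -> float:
    """Placeholder policy returning a brake command in [0, 1]."""
    time_to_impact = obs["obstacle_dist_m"] / max(obs["speed_mps"], 0.1)
    return 1.0 if time_to_impact < 2.0 else 0.3

def run_scenario(scenario: dict, controller, dt: float = 0.05,
                 max_brake_mps2: float = 8.0) -> bool:
    """Step simple point-mass physics; pass if we stop before the obstacle."""
    dist, speed = scenario["obstacle_dist_m"], scenario["speed_mps"]
    while speed > 0:
        brake = controller({"obstacle_dist_m": dist, "speed_mps": speed})
        speed = max(0.0, speed - brake * max_brake_mps2 * dt)
        dist -= speed * dt
        if dist <= 0:
            return False  # a collision in simulation, not on the 101
    return True

for s in SCENARIOS:
    print(s["name"], "PASS" if run_scenario(s, toy_controller) else "FAIL")
```

Run it and the toy policy passes the long-range case and fails the short-range ones -- exactly the kind of thing you want to discover off the road.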
I think it's likely that much more of self-driving car development is smoke and mirrors than people realize. Best case scenarios are promoted as examples of how innovative a company is. Great PR, not necessarily a practical result.
This is a sensor limitation. They have fully admitted this several times (heavy rain too). Equating the fact that this guy can't handle any emergency situations with a sensor limitation all of the lidar systems suffer from is stupid.
Google has shown many times that they have logic to handle routes around obstructions, construction, etc as well as cars running red lights, pedestrians walking into the street, etc. At least read up on something before you call it smoke and mirrors.
I'm not saying it won't ever happen, I'm not saying there haven't been developments in the technology. But people seem to have a disconnect in expectations of where the technology is, and where marketing departments for these tech companies want you to believe the technology is.
If SSD = Solid state disk, then HDD = Hard disk disk.
HDD = HARD DISK drive
SSD = SOLID STATE drive
The biggest thing here IMO is that this is self-funded. Any startup trying to do what he is doing in this environment would have raised $50 million, hired hundreds of engineers from top-notch schools, been accepted into YC, and had Marc Andreessen, Paul Graham, Sam Altman, and all the rest singing their praises.
Kudos to him for being self-funded.
Seems they recently raised $15M: http://techcrunch.com/2015/09/18/cruise-2/
Wonder how they compare tech-wise to Geohot's thing.
I'd prefer my autonomous cars to have gone through insane amounts of testing, regulation, etc. This is just too new of a field, and the amount of edge cases you have to handle is practically infinite.
Investors know that their returns are generated by a handful of super-successful companies. And so they have a natural pressure to "swing for the fences".
Founders have a tremendous amount tied up in THIS company, and are naturally risk-averse.
So you get conflicts like the following: there is an initiative which has a 20% chance of losing everything, but could double how much you make. Investors will always want to go for it. Founders reasonably may not.
A million times this. I never really understood how hard it was to explain a (in my mind) simple new technology to the lay person until I had to do it. This is even after spending years as a technical briefer for high power executives.
As for money, yes, it can accelerate growth in its first-order effect; but it also induces stress and so threatens early exhaustion of your other precious resource: personal motivation.
So, as a crack-shot programmer, if you know with 90% certainty you can crank out a self-driving car in 6 months by yourself or fail, but only 20% certainty you can arrange a cohesive team with someone else's money to crank out a car in 1 month or fail (and alienate your team, and ruin your credit)... I would advise taking the 6 months route. Patience is a virtue and sometimes it's better not buying into every pot of snake-oil the SV hype machine wants to sell us.
The reason we don't have an insurrection on our hands now about wealth disparity is that while the wealth of the super wealthy has accelerated hugely, so has the general living standard of the poor. If (when) the jobs go away, that will no longer be the case, and then you are talking about a brutal escalation into a full insurrection. And while the technology and wealth will be on one side, the last 15 years in the Middle East have shown what committed people with pickups and AKs can do against an on-paper massively superior opponent.
I just hope the super wealthy are smart enough to see this coming and avoid it, it would be spectacularly brutal.
It's a nice dream, but the idea of AI and robots doing dishes, picking strawberries, washing cars, cooking meals will never happen.
The best AI cannot beat a population of Mexicans who are basically the glue that holds our modern society together.
If you wanted to see how the U.S. would completely come to a screeching halt, imagine if the rapture took place and claimed only Mexicans.
Our entire way of life depends on them. AI will never replace them.
> It's a nice dream, but the idea of AI and robots doing dishes, picking strawberries, washing cars, cooking meals will never happen.
If something can be automated at a lower cost than paying wages it eventually will be, automation is coming (arguably has been here since the industrial revolution) and it's not stopped yet.
Watch this - and tell me what's cheaper, robots or Mexican slaves.
"Jobs" are not an end in themselves, and are decreasingly relevant in the information age.
Like Palmer Luckey of Oculus VR, I hope G Hotz has a similar story to tell at the end of it all.
Yep, he's still in his twenties.
Naivety is a very good thing at times.
I've seen average people achieve incredible things, and not because what they did was incredible... but just because they started work on things that no-one else thought they could complete. Some way into it, when enough progress has been made, people have rushed to give support because "halfway there but badly done" is a hell of a lot better than "not even started yet".
But also now that I am in my 30s, and they are as well, we frequently look back at that time and laugh about being that young. "Man, you were fun to work with, but also what were we thinking"
So I definitely wish Hotz all the luck. If nothing else, the more smart people working on the problem of self driving cars, the better.
My comment mostly stemmed from amusement of his quotes.
Part of this was hubris. The thought of someone I considered less capable than myself accomplishing something I felt I could not damaged my ego. This was humbling.
Part of this was experience. The experience to know that attempting the hard or impossible is sometimes worth the effort, whether you succeed or not. This was educational.
Part of this was ambition. Ambition to do something new, to ignore the naysayers and noways when needed, and forge your own path, which I've always felt short on, but have steadily worked on over time. This is ongoing.
I do remember being about 19 and thinking I was the best programmer in the world. By about 22 I had rewritten as much of my old code as I possibly could because it was so horrible. Somewhere between there and now I've gotten a cynical bit of humility to temper my ego. I think the cynical part is that my ambition has not lessened, just my belief that I can succeed.
One Steve Jobs philosophy is focus and say no. I'm guessing I could do better if I said no to all but a single project.
"There's nothing like succeeding at something you weren't even qualified to attempt."
Thanks for sharing.
People building untested self driving cars is an entirely legitimate concern.
"Nationally, 963,000 teen drivers were involved in police-reported motor vehicle crashes in 2013, which resulted in 383,000 injuries and 2,865 deaths"
I'd worry about that more than the odd geek with a laptop.
“I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”
That's what I think 'foolish' means in "Stay hungry, stay foolish."
He might not be naive.
In my opinion, we must never underestimate people.
>>but just because they started work on things that no-one else thought they could complete.
Nothing fails like smartness. The reason why a few people achieve the impossible while far more intelligent and smart people don't is that the curse of intelligence makes them believe certain things are impossible.
The fool didn't know it was impossible, so he did it.
I also realize this kid probably won't end up making a huge dent in the universe... but... statistically speaking, there should be several "Leonardo da Vinci"-level humans alive right now. Why not this kid?
Impressive as they are, his chops still don't support his claim to "know everything there is to know". The Dunning-Kruger effect is in full swing.
I kind of took that to be like how Musk talks about needing to know first principles. In the article you can see that he was humble about what he thought he knew, took jobs here and there and eventually confirmed that he was at the cutting edge, that he knew 'everything there is to know' about this special area.
That's when he realized that he was qualified to try this. IMO, anyway ;)
I sold my first company and the investors did very well, but I made tons of stupid mistakes in the process. Not least of which was holding on to dotcom stock that I thought would go to the moon but which mostly went down the drain.
(I'm very familiar with this literature - see my username)
I assume you're talking about using backpropagation with gradient descent. Backpropagation itself isn't all that interesting. The interesting part is that it works for practical problems and doesn't get stuck in shallow local minima.
Someone who has digested enough of the AI literature to think about the methods in aggregate is very likely to be in a position to see any particular method as a "simple" implementation of some more general set of principles.
But the particular quote is referring to learning rates in autonomous robotics, especially visual classification in complex real-world scenes.
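For readers following along, the mechanics under discussion do fit on a page. A minimal two-layer net trained by backpropagation plus gradient descent on XOR, the classic toy problem with a non-convex loss surface where, in practice, training still finds a good minimum from most random starts (hyperparameters are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule, layer by layer (cross-entropy loss).
    dp = p - y                      # dL/dlogits for sigmoid + cross-entropy
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)  # back through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    # Gradient descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p.ravel(), 3))  # approaches [0, 1, 1, 0]
```

As the next comment notes, the fact that this recipe works so reliably at scale is exactly the part that isn't mathematically well understood.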
I have worked and published in ML since the early 1990s, was a program chair for the learning track at NIPS one year, participated in the same DARPA learning-to-drive program that Yann LeCun did, and don't consider the math behind "state-of-the-art papers" to be simple.
Just taking deep learning: there are a lot of tricks and recipes (e.g., rectified-linear activations, number of layers, staged training) that are not mathematically understood. It's exciting, but mathematically still a jungle. Just because a neophyte can code and optimize a network does not mean that the math that explains why it actually works is simple. As engineers, we need to understand why it works before using it in a safety-critical situation.
Now, to understand WHY the algorithms work, and why they give you the result they claim to calculate, is quite hard, but that understanding is not required to implement those algorithms.
If only he knew about the Dunning Kruger effect...
(probably one of the more Buddhist-ish gems from Western philosophy)
> A person in their 20's knows nothing, but thinks they've outsmarted the world...
Is a dangerous and gross generalization. I totally agree with the changing of perspectives point, but feel that this community has a very clear bias from the older gen (30s and up) against the younger gen (teens and twenties). That's all I'm saying. It's divisive. Instead of saying they "know nothing", it should be phrased, "still have a lot to learn."
> This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment
I think the point you are making is generally valid... But he is a savant. I don't think it is wise to apply generalities to him.
Yes, age will change some of his sharper edges, but he is already pretty unusual.
I don't think this has anything to do with saying he knows everything he needs to know in the world.
Anyways, wouldn't you agree that it is better to be empathetic rather than thinking you're an idiot?
To be honest, I recommend faking to yourself that you're in your 20s still :) Much healthier attitude.
Self-driving cars (in some form or other, under some loose definition of "self" and "driving") have been around since the 1920s. But it still remains a vexing problem.
It is quite easy to program a car to stay between 2 cars and follow the car in front. It is quite another to have the same car drive on (a) a road without lane markings; (b) in adverse weather conditions (snow, anybody? Hotz should take the car to Tahoe); (c) in traffic anomalies (ambulance/cop approaching from behind; accident/debris in front; etc. etc.); and so on.
No offense to GeoHot, but I'd love to see his system work in rush-hour 101 traffic; or cross the Bay Bridge, where (coming to SF) the lanes merge arbitrarily.
The key challenges are not only to drive when there's traffic; but to also drive when there's NO traffic, because lane markings, etc. are practically nonexistent in many places.
Having said all that, I still admire his enthusiasm and drive (no pun intended). Tinker on!
Mobileye is doing something interesting by curating the reliable parts of the dataset (e.g. they have curated databases of traffic signs for each region) -- again not something you could do on your own, and seemingly archaic (hence GeoHot's criticism), but if you can afford it, it can speed up the training significantly.
Tesla is a massive resource here because they already have a huge fleet of internet-connected cars providing enough data to fill the aforementioned training set in a matter of days or months: let's estimate their fleet at 40,000 cars -- then they could fill that minimum dataset in less than a day, and in a month they might have a 100x safety margin. Of course, there's a big technical problem of relaying all that video (maybe they just relay prediction failures), but the data is there.
Another fundamental problem with exclusively hands-off training (and little optimal control theory, etc) is picking up bad habits from drivers -- even the best algorithms will have a hard time and be only about as good as a good driver in each scenario, in the best case -- since the training data is acting as a ground truth.
The problem is: there are new edge cases born every day.
Consider, for example, an accident where the cops have set up flares. How often do you come across one of those? Very rarely, I imagine. And even if you did come across it in your training set: how does the ML know that you are following the cops' signals, and not just randomly switching lanes? That the flares are a critical signal?
Ultimately, as long as the number of cars driving autonomously is small enough and procedures change slowly enough, you should be able to continuously update the driving system.
But let me reinforce that a pure learning approach, even with very large datasets, may not be as efficient as one would like. The curation of signs is a good idea, and manually reviewing accidents and near misses (a highly human-intensive task), and perhaps flagging bad driving behavior (probably after some outlier screening, which can be good or bad), will be important to get it really good with the training-intensive approach (as opposed to the top-down optimal path planning and control approach).
EDIT: Mobileye CEO discusses some interesting design issues and manual validation (and shows they have lots of data, good sign) https://www.youtube.com/watch?v=kp3ik5f3-2c&feature=youtu.be...
It depends on what sensors are in use and how the environment affects them. I can't get into much detail unfortunately, but I have seen radar systems that use naive Bayes classifiers for target detection and classification. Those systems required large numbers of examples across a large, multi-dimensional space to work effectively. Target detection and identification is a trivial task compared to what the control system of an autonomous vehicle needs to handle.
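As a flavor of what that kind of classifier looks like (emphatically not the actual radar system -- just a generic Gaussian naive Bayes over made-up target features, using scikit-learn):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Made-up radar-return features: [RCS dBsm, doppler m/s, extent m].
def sample_targets(n, mean, scale, label):
    X = rng.normal(loc=mean, scale=scale, size=(n, 3))
    return X, np.full(n, label)

X_car, y_car = sample_targets(500, [10.0, 20.0, 4.5], [3.0, 8.0, 0.5], 0)
X_ped, y_ped = sample_targets(500, [-8.0, 1.5, 0.5], [3.0, 1.0, 0.2], 1)

X = np.vstack([X_car, X_ped])
y = np.concatenate([y_car, y_ped])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Class-conditional Gaussians per feature; needs lots of real examples
# across the whole feature space to work in practice.
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

On well-separated synthetic clusters like these it scores nearly perfectly; the hard part, as the comment says, is covering the real multi-dimensional space with enough examples.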
who validates all this data?
attaching a dnn to a driver as a training set is a pipe dream, for now. maybe after we understand how our brain perceives time and builds models of future outcomes, we could apply that to build better nn. for now, nn are best used as classifiers in a controlled environment, not in an environment with unpredictable states.
and especially not in an environment with adversaries http://spectrum.ieee.org/cars-that-think/transportation/self...
Sensor failure or well characterized adversarial inputs are actually really easy to deal with -- they are very easy to simulate with a given dataset and self-validate using traditional techniques -- simply make one or more cameras fail (or receive spurious sigs) and verify the output.
It's a good point that probably all autonomous cars will need a contingency plan (probably human intervention and/or blind emergency stops) with non-zero probability -- even if you have a redundant network of cameras around your vehicle, a critical number can and will occasionally fail (when you look at the fleet sizes that will be dealt with).
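A sketch of that self-validation idea: replay a batch with one simulated camera feed zeroed out, and check that the stack degrades to its contingency behavior instead of carrying on blind. The perception stack and fault policy here are stand-ins, not anyone's real architecture:

```python
import numpy as np

rng = np.random.default_rng(7)
N_CAMERAS, H, W = 6, 32, 32  # hypothetical surround-view rig, tiny frames

def perception_stack(frames: np.ndarray) -> dict:
    """Stand-in for the real model: returns an action plus a fault flag."""
    per_cam_energy = frames.reshape(N_CAMERAS, -1).std(axis=1)
    dead = per_cam_energy < 1e-6  # a flatlined feed is suspicious
    return {"action": "drive" if not dead.any() else "handoff",
            "dead_cameras": np.flatnonzero(dead).tolist()}

# Nominal batch: all cameras alive.
frames = rng.normal(size=(N_CAMERAS, H, W))
assert perception_stack(frames)["action"] == "drive"

# Inject failures: kill each camera in turn, verify the contingency fires.
for cam in range(N_CAMERAS):
    faulty = frames.copy()
    faulty[cam] = 0.0  # simulated dead feed
    out = perception_stack(faulty)
    assert out["action"] == "handoff" and out["dead_cameras"] == [cam]

print("all single-camera failure cases trigger the contingency")
```

The nice property, as noted above, is that this kind of fault injection is fully automatable against recorded data, unlike validating behavior in genuinely novel traffic.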