Udacity plans to build its own open-source self-driving car (techcrunch.com)
213 points by perseusprime11 on Sept 17, 2016 | 94 comments



Given how important self-driving vehicles are, on so many fronts, I think it's fantastic that a company with resources is pushing forward an OSS implementation. Safety alone is reason enough for there to be a good OSS reference implementation. I'm surprised/disappointed that government hasn't been more instrumental in pushing forward an agenda of promoting self-driving cars (while expending resources on other types of work that will be far less effective long-term).

So, this is super cool. I'm surprised it's Udacity doing it, on a couple of fronts:

It seems, on the surface, outside of their core competency...but thinking about it, it does make some sense, if they really have enough paying students (and maybe sponsoring organizations) to make it work. I mean, schools that teach auto repair don't work on fake cars. Why would a school teaching self-driving limit themselves to simulated cars?

But, it's also surprising that Udacity has the funds to make it work. The cost of building self-driving car technology must have come way down just in the past few years. And/or Udacity must be making a lot more money than I would have guessed based on how crowded their market is.

Regardless, it's super cool!


> I'm surprised/disappointed that government hasn't been more instrumental in pushing forward an agenda of promoting self-driving cars

The DARPA Grand Challenge in 2005 and Urban Challenge in 2007 are what started it all.

Now that Google, Uber and others are racing to commercialize, all the government has to do is not overregulate. No need to "push forward an agenda".

(FWIW, DARPA, then known as ARPA, did the same thing with the internet a few decades ago.)


There's a rule-of-thumb that academic research takes 10–15 years to be commercialized, so we're right on schedule now 10 years after DARPA's 2005 and 2007 Challenges. :)


I can't wait for 10 years from now, when I'll have a walking robot slave thanks to the recent grand challenges.


My university participated in those challenges. They were pretty cool. But yes, once it was proven that something was possible, it opened the floodgates for commercialization.

I'm more surprised/disappointed that someone on HN wasn't aware of those initiatives. It shows why some people favor defunding government research. Though I imagine DARPA, being part of the military, wouldn't have been in that boat, but my point remains the same for NASA, NSF, NOAA, etc.


"I'm more surprised/disappointed that someone on HN wasn't aware of those initiatives."

You've made an assumption that I'm unaware of them, which is incorrect.

I just think they're very small in the grand scheme of things. The DARPA challenges are a blip on the radar of the federal budget, as it relates to highways, auto safety, fuel efficiency, and a variety of other areas where self-driving cars will have a tremendous impact.


This is false. People have been thinking about and seriously researching autonomous driving since the '80s: https://en.wikipedia.org/wiki/History_of_autonomous_cars.


This is not surprising. Sebastian Thrun[1], one of the founders of Udacity, specializes in robotic automobiles.

[1] https://en.wikipedia.org/wiki/Sebastian_Thrun


Yes, he won the 2005 DARPA Grand Challenge.


And they're an education startup. Udacity has a leg up in that they can make their own robotic car engineers.


Thank you, and great thoughts! I think we'd surprise you on our growth, but costs have also decreased a fair amount in the past 4-5 years.


It is way cheaper. The cars can be leased or rented, and they come ready to use, with electronically controlled steering, throttle, and braking. The hardware is accessible. The software is fairly straightforward as long as you define a good sandbox.


Hello everyone! Former YC founder here (S11) who now works at Udacity on this team. I'd love to answer any questions on this project or our autonomous vehicles curriculum, and welcome you to our enthusiasts Slack team (http://nd013.udacity.com).


Why do it if "it isn’t the core focus of Udacity’s business" as the founder, Sebastian Thrun, said?

Seems like creating a free and open-source simulator would be of more value than trying to get students to build hardware-based vehicles?

DARPA, YC, and others have taken this approach for various reasons in some projects, and it seems that if the intent is to teach, learn, share, etc., it'd be a better investment.


Great question! My 2 cents: our core mission is to democratize education, and although the car itself won't make or break Udacity, I think it can contribute dramatically to our core mission.

How so? We have students all around the world (almost every country is represented in our DAU) who will never have access to a car outfitted with hardware and sensors (easy $125k), never mind the costs needed to get a permit to get on the road ($50k!). Being able to contribute code and see the results run in real environments (ask Sebastian what he thinks about simulation!) could be a huge advantage to students around the world in their quest to jump into this industry and get credibility.

Speaking of credibility, we want to prove to the world we really know what we're doing, and that our curriculum is truly legit.

tl;dr: We hope that by open sourcing our car that we can give opportunities to students around the world who otherwise wouldn't have it.


>> "ask Sebastian what he thinks about simulation"

Thanks, appreciate you addressing the questions.

If you wanted to ping Sebastian and let him know about the AMA, I'm sure there would be a lot of interest beyond just me getting answers to my questions from him. I truly am curious to hear his take, since, as you say, the barrier to running the software on real hardware would likely be beyond the reach of an individual student, or even a group of students in close proximity to each other.


Let me work on that!


A simulator is indeed more practical, but a live application will allow exposure in a way the simulator cannot. It's about theory and practice (and marketing, in this case).

I did simulations before doing anything in practice. The benefits were many, but there is nothing like the real thing.

Edit: Removed presumptuous comment about myself


Maybe Thrun doesn't feel like launching separate companies for each of his interests.


Hi Oliver. There are thousands of applications for 250 seats per cohort, and it sounds like one cohort will start per month. Is there a rhyme or reason to how admission to a cohort will work?


Hello! You can check out this FAQ for info: https://www.udacity.com/drive/faq

We're taking a lot of learnings from our Georgia Tech OMS program to find amazing students from around the world.


Hi. Simple question: what's the tech stack that will be used for the open-source software that will run the self-driving car? (I imagine a statically typed, safe programming language?)


I am currently working full time as a scientific developer with substantial background in statistics and control theory, and I want to attend, but I am not sure about the expected course load. Could you discuss average hours a student is expected to spend on this? And whether this is like a full-time student course load or a part-time course load?


We optimize our curriculum for those who are unable to spend all of their time on education. 10hrs/week is our recommended amount of learning.


Will ethics (e.g., the collection and storage of personal information, or professional responsibilities in regulated industries) be given any treatment?


Hey! Great question. We strongly believe that it's important to understand the ethical implications of such technology. However, we don't believe we are experts or authorities on the ethical implications and as such will provide recommended readings but not cover it ourselves.


Could you go into a little more detail here? Thanks!


Thinking of training data sets as an example: they contain a lot of personal information (faces, license plates, times, locations, speeds, etc.). Depends on jurisdiction, but there likely aren't legal issues in amassing and using this data when it is collected in plain view from public areas; I envision possible ethical issues though. Could you release training data for public use, without obscuring identifying information? There may not be legal issues in releasing raw data, but keeping in mind that the information is sensitive and recorded without consent, and that an engineer's first duty is to the public health, safety, and well-being, might there be a professional obligation to sanitize the data?

My example might be a bit contrived, but I think there are going to be many valid (and far better!) questions in this discipline that should be asked and considered, and I think your graduates need to be equipped to do so.


It isn't just ethics; in Europe, at least, it's regulatory. (See GDPR and its predecessors.)

The response from Udacity suggests not.


To put this into perspective, it wasn't so long ago (2004) that nobody managed to pass the DARPA Grand Challenge of having a car drive itself around in the Mojave desert. We've come a long way very quickly!


What was the most important achievement since then? Was it the LIDAR system? Or was it 3d vision? Or some other form of AI?


I'm sure someone more qualified than me is floating around here. From what I heard at CMU at the time, the GPS data in '04 was a couple feet off, and everyone was driving a few feet off the road as a result. I imagine the key takeaway was that one couldn't depend on GPS.


That caused us trouble in the 2005 Grand Challenge. It turns out that Novatel and Garmin GPSs were about a meter apart. They're both applying corrections for atmospheric distortion obtained from ground stations and distributed through a geostationary satellite, which can give 15cm precision. We had Novatel, and DARPA had measured the course with Garmin. I talked to the JPL team, which also had Novatel, and they had a similar error.

We were strictly obeying the course boundaries, and had a terrible time getting through narrow gates where DARPA's waypoint file had a narrow width designed to guide us through the gates. If you look at videos of our runs, you can see the vehicle backing up and trying to get through a narrow obstacle. It's trying to get past a real-world obstacle on one side and a GPS limit on the other, which has narrowed the allowed path to where it can't quite fit.

So for the second run, I put in a patch to add 1 meter to DARPA's lane width. But I forgot to push it out to the vehicle, and we botched the second test run. It was in place for the third test run, though.


DARPA gave you bad data and told you you had to use it? Why would your system insist on staying within DARPA's boundaries in the first place -- did they make "out of bounds" too narrow in a misguided attempt to be helpful?


Yes, they made "out of bounds" too narrow in an attempt to be helpful. The data file provided to each team just before each event was a set of GPS waypoints, each with a width. The width of each segment was the minimum of the width at the endpoints.

In the 2004 Grand Challenge, the bounds were much wider and most vehicles screwed up. As it turned out, you could almost drive the 2005 Grand Challenge by staying centered in the DARPA-defined path. But we didn't know that in advance.
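The corridor model described here can be sketched roughly as follows (a hypothetical reconstruction in Python; the function names, the local metric frame, and the numbers are illustrative, not DARPA's actual waypoint-file format):

```python
import math

# Sketch: each GPS waypoint carries an allowed width, and the corridor
# width of a segment is the minimum of the widths at its two endpoints.

def segment_width(width_a, width_b):
    """Allowed corridor width between two adjacent waypoints."""
    return min(width_a, width_b)

def in_corridor(p, a, b, width, margin=0.0):
    """True if point p lies within width/2 + margin meters of segment a-b.
    Points are (x, y) in a local meter frame; `margin` models the
    1-meter widening patch described in the comment above."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        dist = math.hypot(px - ax, py - ay)
    else:
        # Project p onto the segment, clamped to its endpoints.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        dist = math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    return dist <= width / 2.0 + margin

# A 3 m gate: a vehicle whose GPS reads 2 m off-axis is "out of bounds"
# under the strict corridor, but allowed once the corridor is widened 1 m.
gate = segment_width(4.0, 3.0)                                        # 3.0
strict = in_corridor((5.0, 2.0), (0, 0), (10, 0), gate)               # False
patched = in_corridor((5.0, 2.0), (0, 0), (10, 0), gate, margin=1.0)  # True
print(gate, strict, patched)
```

This makes the failure mode concrete: with a ~1 m systematic GPS bias, a strict min-of-endpoints corridor leaves no room at a narrow gate, which is exactly what the 1 m widening patch compensated for.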


Do you think that self-driving cars get clean correct data as input?


Don't Teslas? They require well-defined road lines, otherwise they get confused.

My friend owns a Model S, and on the way to lunch it got confused on off-ramps and parts of the road that were not well defined by lines. It basically shut off Autopilot and required my friend to take the wheel.


It should take a lot more than a bit of paint on the road to upset a self-driving car.


That they get confused and shut off autopilot shows they do not get clean correct data.


This should be much improved in version 8 of the Tesla OS going out Wednesday.


Driving based on GPS instead of actual road perceived through cameras would be a disaster.


Localization is a really hard problem!


I'd argue it's relatively recent developments in deep learning (enabled by GPU and other advancements).

Driving purely with cameras isn't too far away.

See Comma's research: http://comma.ai/research.html


2004 well predates the deep learning leaps we have made recently.


Things have improved on many fronts since the 2004 challenge.

Arguably the biggest difference between the 2004 challenge (best distance was 7 miles) and the 2005 challenge (5 vehicles finished the race) was simply experience and refinement of existing technology. Stanford and CMU (top finishers) completed the race with very different approaches.

The Velodyne lidar was a prototype in the 2005 challenge, and the first commercial version was extremely valuable to the teams that used it during the Urban Challenge. Google relied heavily on it and similar sensing technologies while developing their fleet, and many other groups going after full autonomy have also relied on lidar, so it certainly continues to be important. There are lots of different opinions on the future value of lidar vs. vision. Camera quality and processing power, plus the effectiveness of CNNs, are pushing a lot of people towards vision as a primary sensor.

There's also been a lot of progress for driving in urban environments in modeling, mapping, and prediction. A lot of this comes from collecting lots of data and building maps and behavioral models for objects the vehicle will interact with (cars, people...) Obviously for that the advancements in ML and deep learning don't hurt.


One interesting thing to me is that today we are able to collect way more data around driving than before. Since 2004, almost all of us have an incredible video camera in our pockets capable of recording our driving and the road conditions. We also almost all use highly detailed maps that we correct and add to (waze)! Finally, we have companies that care deeply about technology and data (Uber, Tesla) with fleets of cars on the road. These advancements mean we can improve on such software faster than before.


It's a combination of all of them, with a bigger emphasis on software. Better software allows the hardware to improve, and vice versa.


Machine learning algorithms and hardware became much better.


Crowd-sourced machine learning


When will a degree from Udacity have enough credibility to get you hired as a:

  Consultant at Bain or McKinsey

  Developer at a Big 4 tech company

  Investment banker on Wall street

  Other highly competitive jobs
Has it happened a lot? Once? Never?

(in a situation where experience alone wouldn't have earned the position)


I doubt it will ever happen.

The purpose of top degrees is signaling. Udacity, by its very mission (trying to democratize education), cannot offer that signal.

Of course, people with Udacity degrees likely already do get top jobs. They're not using the signal of the Udacity degree to get them though.


We have graduates at Big 4 Tech (Google, Microsoft, etc.) and other competitive tech-focused companies. I'm not sure about investment bankers, but we teach software engineering primarily, so I wouldn't expect many graduates to go into I-banking or consulting.


Can you confirm you have Big 4 tech graduates who were hired fresh from getting their Udacity degree without prior experience?

For example since I attended a mediocre state school these companies did not come to interview at my school, and never granted interviews to those who directly applied.

A few years later I ended up getting hired by these same companies based on the merit of my actual experience.

Do you see the big difference here, though? I couldn't fairly say that the Big 4 hire from mediocre state schools (or from Udacity) just because someone ended up getting a job there later.


Hey!

We do indeed have people who got jobs fresh out of graduating from our program. Obviously it's hard for me to discount all their prior experience and say it was entirely up to us. What I can confirm is that within months of graduating from our program they were able to get these jobs. You can read about some of these students here: https://www.udacity.com/success


Speaking of the prestige issue that WhitneyLand brought up, it seems that both of the featured engineers who are working at a "Big 4" software company attended top schools. One seems to have gone to Harvard and the other attended Rice University.


I think he's talking about the tech side of trading that occurs at banks and also a lot of high profile prop trading shops.


When Udacity is as difficult to gain admission to as the schools McKinsey and Goldman typically recruit from?


I'd be willing to bet money on the fact that anyone who got a job at a top consulting firm or an investment banking job at a bulge bracket bank was not hired primarily as a result of a Udacity degree. Hiring pipelines and signaling are a big reason why a degree from a top school is worthwhile and why top schools are not threatened by MOOCs.


If it were possible to short Udacity, I'd be putting all my money into a short position.

This is obviously so far outside of Udacity's core wheelhouse that I have to assume it's simply an ego project for the founder. Unfortunately, before you start pursuing unrelated ego projects, your company should have several billion in cash in the bank.

I cannot possibly see how this ends up working out well for Udacity. Developing self-driving cars is very expensive and not their core expertise at all.


I think it's mostly meant as a class project, and that seems like something in their wheelhouse.


Also seems very smart of them to have partners like Mercedes. They're likely offsetting a huge portion of the cost for the program, and it's all from taking advantage of the climate where everyone is dying to get a head start in self-driving.


Also, Thrun has a history in robotics/AI/machine learning, so it is in his wheelhouse. Solar, electric vehicles, and space travel were well outside PayPal's expertise, but one of its co-founders went on to revolutionize all three of those industries... and many people called him crazy for even considering launching his own rockets into space.


The Uber/Volvo/whatever approach may deliver earlier value, but the Google approach is what's needed to have safe cars in uncontrolled urban environments.

I get quite frustrated with articles and headlines and even analysts who don't understand the quite fundamental difference.

Uber, who lives almost entirely in that space, will be in for a rude awakening IMO. Unless they already know they are ten years behind Google, and plan to use instrumented, human-operated semi-autonomous vehicles for learning, and cynically market it as "self-driving" when it isn't. That'd work, too.


I think this is peak self driving car.


I think it's a marketplace in its early infancy.


Agreed. Some of the biggest companies in the world are investing in the space: Apple, Google, Ford, GM, Mercedes Benz, Tesla, Uber, NVidia, Baidu, Didi and many more!


It's not peak until we have a wave of absolutely fraudulent schemes designed to take money from the gullible general public.

"Kickstarter - Self-driving car - release in 2019! 4795% funded! Backers get one for just $10,000!"

"Stock tip - self-driving car startup - invest now to get pre-market shares in this company! Top Secret!"

That sort of thing. When the TV morning shows start pumping self-driving cars every day.


Peak self-driving car would be proclaiming that there is a severe talent shortage, and acquihires being valued at $10M/head.

oops.. Ex-Googler Sebastian Thrun says the going rate for self-driving talent is $10 million per person http://www.recode.net/2016/9/17/12943214/sebastian-thrun-sel...


Ha ha that is also a great indicator of overheating. It's like people think a great product will be available in a few years and we don't even have the barely tolerable version yet. There will have to be the Treo version of the autonomous car before there is the iPhone version.


Not at all. There are still many issues to solve. Some are too expensive to be approached by smaller efforts.


I meant the peak of the hype, not the peak of the work required. What if this is like virtual reality in the 90s?


I don't think it's like VR, given that there are real use cases and the technology is advancing quickly. Plus, it's going to make/save companies money (VR was seen as cool back then, but there was no money in it).


But in reality, we don't even know how far we are from a level-5 self-driving car.


We don't know how far (or if it will ever happen) we are from level 3.


Wonder if part of Thrun's departure from Google is their closed focus? Open source self-driving car is something Google would never allow under current leadership.


With the recent commercial competition catching up, I wonder if Google won't reconsider..


I don't see any commercial company catching up with the really hard problems like "pedestrian behind hedge" or "policeman's hand signals" or whatever. What I see from the commercial companies is little more advanced than "automated lane keeping and sometimes recognizing a red light, at least when cars in front stop." (OK, I'm being a little uncharitable, but it's to illustrate the problem.)


Bear in mind, when you're looking at making a commercial product, you look at cost. Presumably, that's why most companies have ruled out Google's sensor setup, which costs more than my house. As costs drop, sensors will probably be added or improved, and you'll see more companies integrate those features.

Google's program is a tech demo for PR purposes. Everyone else is trying to actually put systems in cars.


Most of the projects gunning for L4 have sensor arrays comparable to Google's, with the exception of Uber, who has 3× more: 20 cameras and 7 lidars amongst all the rest. I'm pretty sure Uber's in it to develop a commercial product, although the PR probably doesn't hurt either.

Thrun guesses that lidar will ultimately be unnecessary, and I think that by the time the hard AI problems are solved, he'll be proven right. Humans navigate with a pair of eyeballs and not much else, after all. But while extraneous sensors can always be removed, not having enough could hamper progress.


> Humans navigate with a pair of eyeballs and not much else, after all

Only if by "not much else", you are referring to a ridiculously performant image processor - the visual cortex does an amazing job! My guess is it will be many decades before we can get similar performance in hardware/software.


We have ridiculously performant image processors. The dynamic range and light sensitivity of top-shelf CMOS sensors continue to improve, and consumer-tech-driven image processing software, which can extract useful data from all sorts of lighting conditions, has made leaps and bounds.

Behind that is computer vision, revolutionized by deep learning. It can identify street signs, makes and models of vehicles and other road objects, it can keep a car centred in a lane when there's limited information such as snowy conditions, or with chaotic visual information such as a sun dappled country road covered in leaves.

Google has patented a method for interpreting the hand signals of police officers. They can read cyclist signals, and interpret the body language of pedestrians to interpret whether they intend to cross the road or not to avoid false braking incidents.

And Nvidia has demoed a way to build 3d point cloud fields (like Lidar) using off the shelf cameras. Robust computer vision is heavy on compute, but they're doing it.
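For a rough idea of how plain cameras can yield lidar-like 3D points, here is a generic stereo back-projection sketch (this is not Nvidia's actual method; the focal length, baseline, and disparity values are made-up illustrative numbers):

```python
import numpy as np

# Sketch: recover a 3D point cloud from a calibrated stereo camera pair.
# With a per-pixel disparity d (pixels), depth follows the pinhole-stereo
# relation Z = f * B / d, and each pixel back-projects to a 3D point.

def disparity_to_points(disparity, f=700.0, baseline=0.54, cx=320.0, cy=240.0):
    """Convert a disparity map (pixels) into an (N, 3) array of 3D points.

    f        -- focal length in pixels (assumed value)
    baseline -- distance between the two cameras in meters (assumed value)
    cx, cy   -- principal point (image center) in pixels
    """
    v, u = np.nonzero(disparity > 0)   # pixels with a valid stereo match
    d = disparity[v, u]
    z = f * baseline / d               # depth: Z = f * B / d
    x = (u - cx) * z / f               # back-project through the pinhole model
    y = (v - cy) * z / f
    return np.column_stack([x, y, z])

# Toy 4x4 disparity map: larger disparity means a closer surface.
disp = np.zeros((4, 4))
disp[1, 2] = 35.0   # a surface 700 * 0.54 / 35 = 10.8 m away
disp[2, 3] = 70.0   # a surface 5.4 m away
points = disparity_to_points(disp)
print(points.shape)  # (2, 3)
```

The "heavy on compute" part is producing a dense, reliable disparity map in the first place; the back-projection itself is cheap.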

But behind all this is the higher-level reasoning needed to deal with tens of thousands of edge cases: this is the hard part.

Google made a special project out of programming their cars to effectively assert themselves at four-way stop signs, which is one problem amongst many.

No single edge case is unsolvable, but taking them all on, and getting the whole Rube Goldberg machine to six-sigma reliability, is an epic slog. This is the part that's going to take another 10 or 20 years, though we'll probably see L4 applied in a constrained capacity fairly soon.


> We have ridiculously performant image processors. The dynamic range and light sensitivity of top-shelf CMOS sensors continue to improve, and consumer-tech-driven image processing software, which can extract useful data from all sorts of lighting conditions, has made leaps and bounds.

That may be true, but we are still a long way from matching human performance on things like "where is the edge of this dirt road?"


Or Google is trying to solve L4 and doesn't care about lower levels, while others (esp car manufacturers for safety, and Uber for PR/advertising) see value in detouring to L2/L3 goals.


Or, Google has no intention of releasing a real product, and by setting a far off goal they know they won't reach, gets to milk the PR for a really long time. :)


Sweet! Imagine Udacity later licensing K1 Attack kit car with electric drive from PLA, and open sourcing it for everyone! One can dream ;-)


What is the obsession with self driving cars?

There are so many other important problems in the world that need solving


Car accidents kill 1.3 million people a year. Sounds like a worthy obsession.


I'm sure someone can make the same argument for any area of research.

If self-driving cars become mass-marketed, that'd mean fewer deaths/injuries, less energy consumption/pollution from more efficient driving, more time saved, and possibly fewer manufactured cars from a sharing-economy standpoint.


Curious what open-source stack they will use (like ROS), or if they will write their own.


There's even a ROS project for that: https://github.com/CPFL/Autoware


I am curious how ROS handles real-time requirements. Does it have RTOS extensions?

I used ROS a bit for drone simulations 5 or so years ago; things might have changed these days.


As part of its new nanodegree?


This announcement raises all sorts of red flags, not just for Udacity in particular, but for Silicon Valley as a whole.

This is the story:

Online correspondence school announces it's making a self-driving car, and issuing "nanodegrees" of dubious reputability.

I'm sorry, but what does a correspondence school have to do with self-driving cars? How does this promote the core business? How does it even relate to the core business? Does Udacity have any technical talent that would even be relevant to this? (I'm betting they don't have any computer vision experts.)

This strikes me as a company with too much money, and not enough supervision.

This is not going to end well, for anyone.


"Does Udacity have any technical talent that would even be relevant to this? (I'm betting they don't have any computer vision experts.)"

Sebastian Thrun is a cofounder of Udacity and one of the world's top experts in self-driving cars. Thrun won 1st place in the 2005 DARPA Grand Challenge, in which self-driving cars raced across the Mojave desert. He is also a VP at Google, where he has worked on Google's self-driving car technology. One of the first courses at Udacity was called "Artificial Intelligence for Robotics."

https://en.wikipedia.org/wiki/Sebastian_Thrun



