Hacker News new | comments | ask | show | jobs | submit login
The first person to hack the iPhone is building a self-driving car (bloomberg.com)
949 points by bcg1 on Dec 16, 2015 | hide | past | web | favorite | 447 comments

Prototypical case of the 80/20 rule. He has implemented the happy case. But that system is nothing people would realistically want driving their cars.

What he did is impressive. But the results are not that outlandish for a talented person.

1) Hook up a computer to the CAN-Bus network of the car [1] and attach a bunch of sensor peripherals.

2) Drive around for some time and record everything to disk.

3) Implement some of the recent ideas from deep reinforcement learning [2,3]. For training, feed the system the observations from test drives and reward actions that mimic the reactions of actual drivers.
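Step 3, stripped to its core, is imitation: fit a policy that maps recorded observations to the commands the human driver issued. A minimal sketch with a linear policy and synthetic data (real systems use deep networks; every number and feature name below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "recorded drive": observations (say, lane offset, heading error,
# speed) paired with the steering command the human driver issued.
obs = rng.normal(size=(500, 3))
true_policy = np.array([-0.8, -1.5, 0.05])   # the driver's (hidden) behavior
steer = obs @ true_policy + rng.normal(scale=0.01, size=500)

# Imitation with a linear policy: least-squares fit of command on observation.
w, *_ = np.linalg.lstsq(obs, steer, rcond=None)

# The cloned policy mimics the driver on observations like those in the log...
print(np.array([0.2, -0.1, 1.0]) @ w)

# ...but it has learned nothing about situations absent from the data,
# which is exactly the emergency-case objection above.
```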

In 2k lines of code he probably does not have a car model that can be used for path planning [4] (with tire slippage, etc.). So his system will make errors in emergency situations. Especially since the neural net has never experienced most emergencies and could not learn the appropriate reactions.

And guess what, emergency situations are the hard part. Driving on a freeway with visible lane markings is easy. German research projects have driven autonomously on the Autobahn since the 80s [5]. Neural networks have been used for the task since about the same time [6].

[1] http://www.instructables.com/id/Hack-your-vehicle-CAN-BUS-wi...

[2] http://arxiv.org/abs/1509.02971

[3] http://arxiv.org/abs/1504.00702

[4] http://www.rem2030.de/rem2030-wAssets/docs/downloads/07_Konf...

[5] https://en.wikipedia.org/wiki/Eureka_Prometheus_Project

[6] http://repository.cmu.edu/cgi/viewcontent.cgi?article=2874&c...

A project like this is extremely impressive. The guy deserves a lot of credit (and maybe some investment?). That's hacking in the truest sense.

The parent's checklist misses a bunch of things. For instance "1) Hook up a computer to the CAN-Bus network of the car". That alone is not trivial. It is trivial if you want to read the car's odometer, but good luck doing more than that. For instance, people are still trying to make sense of the reported battery cell voltages in the Nissan Leaf. All the interesting features are undocumented and require serious reverse engineering. "Hooking up to the CAN-Bus" can easily become a task for a whole month, full-time. Not to mention that the most useful features for the self-driving part are probably not accessible over the CAN-Bus: people are still trying to unlock the doors of the aforementioned Nissan Leaf, and steering, acceleration and braking are unlikely to be on it. "2) Attach a bunch of peripherals" is also hand-wavy, and the same goes for the rest of the post.
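To make the "not trivial" point concrete: raw CAN traffic is just an arbitration ID plus up to eight payload bytes, and the payload layout for each ID is proprietary. Decoding even one signal means guessing byte positions, endianness, scale, and offset. A sketch with an entirely hypothetical speed frame (the ID, byte layout, and scaling below are invented; finding the real ones is the reverse-engineering work):

```python
import struct
from typing import Optional

def decode_speed(can_id: int, payload: bytes) -> Optional[float]:
    """Hypothetical decoder: pretend ID 0x1F4 carries vehicle speed as a
    big-endian uint16 in payload bytes 2-3, scaled by 0.01 km/h. Real
    layouts are undocumented and differ per manufacturer and model."""
    if can_id != 0x1F4 or len(payload) < 4:
        return None
    (raw,) = struct.unpack_from(">H", payload, 2)
    return raw * 0.01

# A fabricated captured frame: raw value 0x2B67 = 11111, about 111.11 km/h.
print(decode_speed(0x1F4, bytes([0, 0, 0x2B, 0x67, 0, 0, 0, 0])))
print(decode_speed(0x0AA, bytes(8)))  # unknown ID: no idea what it carries
```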

It would be like dismissing SpaceX's accomplishments by saying: "1) Build rocket frame. 2) Build engine. 3) Program flight software. 4) Fill up the tanks with fuel. 5) Push a big red button." The devil is in the details.

With that out of the way: if the events happened as described, this guy should be convicted of reckless "driving". Taking a prototype that had only started working a few hours prior to an actual test run in a freeway with other cars is insane. What about some simpler, more useful and less dangerous goal? Such as a lane-departure warning add-on for cars which lack that capability?

The article title is the worst part though. It's not "clever dude created a self-driving car prototype by himself". It is "Dude is taking on Tesla by himself". Which is bullshit.

EDIT: Fix typo.

It's only impressive to outsiders who aren't aware that this hasn't been new for a very long time and reuses the work of others. There are tons of videos and documentation from amateurs and hobbyists hooking computers up to the CAN bus. In parallel with the tech community, the tuner/mod community has been doing this on their own. It's been old news for years, led to many funny pranks and stunt hacks, and culminated in Charlie Miller and Chris Valasek's media stunt last year.

How is it not impressive to take bits of knowledge from multiple domains (programming, instrumentation, electrical engineering, control laws, etc) and fuse them together into a single thing?

We all take from the good work of those around us. But how many people seriously do things with that work? Not many, and disdaining people who do so is not productive or, in my view, a good thing.

I think because the goal posts keep moving with technology. The number of people who have ever combined knowledge from multiple domains into a useful thing may be small relative to the general population, but it's been done. The first time it's impressive. Then others add different ideas and concepts. Then everyone can do it and it feels old.

We've also seen all the news from Google about their efforts and the pain points that they are experiencing. And this guy cobbles some stuff together and just puts it on the road. Most of us are not as smart as this guy, but that's just irresponsible. That just puts a bad taste in people's mouths.

>but that's just irresponsible

It's not like it's unsupervised. Is it any more dangerous than taking a learner human driver out in a car?

Yes. A bug in the program and it takes insane measures (e.g. braking and steering hard right); salvaging that situation is impossible at higher speeds. It's doubtful that a beginner would do such a thing, and even the attempt would take longer, giving the supervisor more time to intervene.

Isn't that the same for most self-driving technology? Computer vision toolsets aren't new. Obviously hooking up to a car's drive systems isn't new, full-size RC cars have been built for years for various reasons. None of the rangefinding hardware equipped on self-driving cars is novel.

Where's the sudden breakthrough? All of this is built on technology and work that came before it. The whole field. It probably only really started being worked on in earnest from a business context because big tech companies like Google had more money than they knew what to do with, and were willing to spend it on ventures with no likelihood of profit any time soon.

Everything that you have ever done in your life has been about reusing the work of others. When was the last time you mined your own copper ore and created your own wires, with a pickaxe you built yourself?

We have to draw the line somewhere.

Google's first Udacity class taught how to build a self-driving car. The basic algorithm is simple and produces a fairly safe vehicle. In no way should it (or others) have been tested on the freeway as described in the article, however.

"Prototypical case of the 80/20 rule. He has implemented the happy case. But that system is nothing people realistically would want to drive their cars."

I'm painfully aware of this. Ten years ago I ran one of the 2005 DARPA Grand Challenge teams. That's about what we produced with less than three full time equivalent people. We didn't have to handle other vehicles, but we did have to handle off-road conditions. Ours didn't make many mistakes, but it was very conservative and kept stopping to rescan its environment with a line-scanning LIDAR on a tilt head.

I'm scared of happy-case automatic driving implementations. Tesla went down that road and had to back up, removing some features. Cruise's PR indicates they were going that way, but they now realize that won't work.

I am really, really interested in the work your team did. Do you have links to your body of work that I could sift through?


> Guy builds a fucking self-driving car. By himself.

> Not that outlandish for a talented person

What planet are you living on? I don't know what you did today, but I played with some jquery animations. This guy drove around in a self driving car that he built himself. It doesn't solve for edge cases? Neither do 90% of CRUD apps. Holy shit.

Give some credit where credit is due. This is not an ordinary or average outcome.

Well, he didn't actually build the car. He built a system that operates the existing car's steering and speed controls. And comparing the proper software solution to a CRUD app undermines the work that's been done by the big players in the space for the past several years.

Well yes, he only built the software that operates a car without a person driving it, connected it to a car, and did it all by himself. One person.

My point was that what 99% of HackerNews does is likely nowhere near as interesting or as difficult, so when the top comments are all shitting on someone who did something that's actually pretty amazing, HackerNews can go to hell. I mean that from the bottom of my heart. I'm done here.

Is it so much to ask that people don't keep erroneously stating he built the car? The car he used was an off-the-shelf component.

What he did was extremely impressive. But he's up against really high expectations. People expect him to have made massive breakthroughs in self driving car technology. Against that expectation, it doesn't seem so impressive.

That is true. But getting a car to drive itself, as a single guy in a month, is really superhuman.

Why is the first comment on HN minimizing this truly impressive project? Of course it's not perfect, he's ONE person.

Because this article reinforces our bizarre cultural notion that one person deserves all the credit for some innovation.

Never mind the ridiculous amount of engineering that was required to build all the tools he's using, and the order of magnitude more engineering required to make this a safe, mass-producible product.

But nah, let's just praise the founder, allow him to get rich while we all do the dirty work.

This comment is depressingly cynical. This is probably the single best definition of "hacking", as the community often refers to it, that I've seen in a very long time. One guy starts working on something only the biggest companies in the world dare attempt, throws together a minimal prototype built on top of existing technology. Just look at the picture of it.

Claims of commercial viability or beating Tesla are a bit ridiculous, but this is pretty damn amazing.

I think it's a fair comment given his quote "I know everything there is to know" and the headline of the article claiming he's "building a self-driving car by himself". I've always thought the "hacker" community attributed value to sharing and building off other's work, but maybe times have changed.

Have you ever spoken to a journalist? It's their job to sell clicks with charged headlines and over-blown quotes. If they followed you for a day I promise they'd generate some equally stupid quotes.

Yes, that kind of journalism exists, but does it have a place here?

Seems obvious to me that the journalist was manipulating what he said, which was that he is deeply familiar with the state of the art of AI tech.

To expand on that: '“I understand the state-of-the-art papers,” he says. “The math is simple"', which seems like an attitude of someone without a solid understanding of ML. But who knows, maybe he's figured out something the rest of the field hasn't...

If this is the "hacker ethos" then I want nothing to do with it:

> “I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”

> “ ‘If’ statements kill.”

> “I want power. Not power over people, but power over nature and the destiny of technology. I just want to know how it all works.”

Hotz is pretty eccentric, but he's also pretty incredible. While technology wouldn't progress very far if everyone was like Hotz, I also don't think it would get very far without people like Hotz.

Exactly. You can appreciate aspects of a person and the work they do without deifying them in their entirety. Linus Torvalds and weev (being an extreme example) fall into this category.

Agreed. I can appreciate people who can go heads down and get things done. Where it falls apart for me is when those people get deified--or deify themselves, like that last quote demonstrates. And when they demonstrate an unwillingness to collaborate.

Then may you get what you want.

He implemented methods from the literature using a sensor specifically designed for this application. Grad students do this for class projects.

It's not "one guy starts working on something only the biggest companies in the world dare attempt" though, it's something hundreds of people have been doing for years now. He's more of a media hacker than he is a car hacker.

You're just easily impressed, that's all :)

1. Make a working prototype

2. Impress investors

3. Hire others to finish the job

4. Profit

Tesla customers invest in Musk. Musk invests in Hotz. Hotz invests in developers, developers in researchers. We're all delegating until we find someone who can finish the job. We're investing in people to hire the right people.

You're assuming that Hotz then does nothing, and also Musk. You really think Musk is sitting around with all this free time not doing anything? You don't think it would all fall apart without the key people still in key roles?

It doesn't work this way. You just move up the value chain.

He didn't write the article, it's not his fault it comes off as cocky. He is tackling an impressive project on his own, and spitting in the face of corporations. He should be giving talks at DEFCON about this, teaching people how he did it.

Your comment screams a superiority complex, but I bet that you are actually a nice person in real-life. Hotz is doing good work, and everyone in the technical field is relying on work done decades before they were born.

He should give talks at defcon teaching people how he did it?

You mean like this one, from Defcon in 2013?


Or The Defcon 19 How To CanBus Hack Workshop, that taught classes about this in 2011?


Unfortunately for him, defcon prefers original content, not someone claiming credit for what has been demonstrated repeatedly in previous years.

He sure didn't write the article, but looking at him and what he's saying on the video gives me the same impression of cockiness. But I bet he's actually a nice person in real life. ;)

It's an impressive personal project, no doubt about that. It's however also important to recognize the difficulty of having a system that works in mass production and handles all kinds of situations. Like someone said earlier, it's easy to have the car drive on a clear day with very visible markers. The hard part is when it rains, when it's foggy, when things are less optimal, etc.

Once he gets to that point he'll find that part to be a lot harder than what he's accomplished so far.

A good point. I did some HTML for my elementary school when I was a kid, and the local newspaper put me up as a 'whiz kid' on their front page. Not that anything I did was shockingly complicated in the slightest, even for the web of that era. Journalists hype stuff, that's nothing new.

Reminds me of the quote from "The Social Network":

>> "If they had engineered a self-driving car, they would've engineered a self driving car".

You are just being absurdly accusative in your comment. That guy has built a self-driving car with nothing but tools available on the market.

Putting a prototype self-driving car on actual roads without understanding the difficulty of that project seems like a legitimate, substantive criticism, and I don't care how many people are involved in the project.

Now, that's a very valid criticism. I don't care about his personality, but testing the car on live roads is asinine. And, this journalist jumps in and excitedly plays up how he was afraid for his life, etc.

Yeah, instead of going after the sensationalism, how about you discourage him from endangering the lives of others who weren't given the opportunity to make such a stupid decision?

Yes, it's one person. WE GET IT.

This article is a hero worship piece about a guy rather than a story about the technology. It's like how you can't find an article about Theranos that isn't actually just a photo-shoot/celebrity worship article about its founder.

Early investors love genius superheroes because lesser investors are willing to pay a premium to invest in companies with a superhero and a story.

The guy clearly is technically brilliant. But I was referring to the results. Not how the results were achieved.

In the video, he claims to want to achieve level 3 driving. Let's see how he can do the following under non-perfect conditions:

- switching lanes

- stopping at lights

- turning corners

- turning left in traffic

Then we can move on to the more difficult situations.

I feel as though there are improvements that could be made to the driving environment itself:

Have all stop lights carry a beacon that tells cars their state: "the light is green north-south, red east-west."

Have exit signs have beacons as well.

Have you thought this out? What happens when someone hacks their own beacons for lulz? So then the beacons have to have public key cryptography. Now all of the firmware will need to be audited and kept updated. Will there be over the air updates? What if someone cracks or steals the key? It seems to me that a target as juicy as "getting control of the North American road network" would be worth a major national power throwing a significant fraction of its resources at it, so that inflates the computing power such devices will need.
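To illustrate the minimum such a beacon would need, here is a toy sketch of signing and freshness-checking a state broadcast. It uses a shared-secret HMAC purely as a stand-in for the public-key signatures a real deployment would require; the message format and key handling are invented:

```python
import hashlib
import hmac
import json

SECRET = b"toy shared key; a real system needs per-signal public-key certs"

def make_beacon(state: str, ts: float) -> dict:
    """Sign a beacon message; ts is the broadcast time (freshness field)."""
    msg = json.dumps({"state": state, "ts": ts}).encode()
    return {"msg": msg, "sig": hmac.new(SECRET, msg, hashlib.sha256).hexdigest()}

def verify_beacon(beacon: dict, now: float, max_age: float = 2.0) -> bool:
    expected = hmac.new(SECRET, beacon["msg"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, beacon["sig"]):
        return False                               # forged or tampered
    ts = json.loads(beacon["msg"])["ts"]
    return (now - ts) <= max_age                   # reject replayed messages

b = make_beacon("NS:green EW:red", ts=1000.0)
print(verify_beacon(b, now=1001.0))                # fresh and authentic: True
b["msg"] = b["msg"].replace(b"red", b"green")
print(verify_beacon(b, now=1001.0))                # tampered: False
```

Even this toy version shows why the objection bites: the moment messages are signed, the whole key-distribution, rotation, and firmware-update problem arrives with them.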

What if someone would install their own traffic light right now?

That's immediately visible to people with eyeballs. The first sign that something is going wrong isn't going to be a car colliding with another car. It's going to be, "Hey, why are those kids installing a light with a step ladder?"

Have you seen a traffic light? They're pretty substantial. How long would it take for you to make one in a hackerspace?

Contrast this with hacking OTA updates for traffic beacons. You might not even have to change any atoms around to do your dirty deed. You might not even have to be there physically.

Why would it need all that if the only thing it's doing is simply announcing its state?

So, 'simply' invest in billions of dollars of infrastructure improvements that will serve less than one percent of vehicles on the road?

It's a custom on HN. Look what happened when DropBox was announced on HN: https://news.ycombinator.com/item?id=8863

One person can build something that starts a revolution. See Woz/Apple.

The real issue here is that self-driving cars are probably the wrong place for that to happen in AI. At best, a solo project creates a crappy prototype where there was no product before (again, see Woz/Apple). The expectation for driverless cars is too high – they need to be 100% good, because your life is on the line, not 80% good.

What's the AI project that would blow people away, even if it was a shadow of a working prototype? I think that's the real question.

I don't follow why AI vehicles need to be 100% good. Plain old human-driven vehicles sure aren't and we accept their utility as being worth the trade.

Imagine the day an AI vehicle causes an accident that otherwise would not have happened.

Even if AI cars are statistically better than humans on average, it's an issue of control. It's true that most accidents are avoidable and caused by human error, but most people are (perhaps overly) confident in their own ability to drive safely (this is also why people text and drive).

We, as the flawed beings we are, can't accept both giving up control and not getting guaranteed safety as a result.

In one word: liability.

In two words: actuarial tables

>What's the AI project that would blow people away, even if it was a shadow of a working prototype?

Robot that can build a better robot?

It's neat that one person did that. But debugging on-highway? Bad idea.

Finding a safe place to test an autonomous vehicle on a budget is hard, but not impossible. Our initial testing in 2004 was in a large unused Sun parking lot in Fremont.[1] (Sun got carried away with expansion plans, and started building a big facility there. They paved the parking lots and poured the building foundations, then stopped construction.) Later off-road testing was at the Woodside Horse Park. We also looked into testing at the Hollister off-road vehicle park, and discovered we could book a sizable area on a weekday for our exclusive use. We never used that, though. We'd also looked into using the old FMC tank test track in San Jose, but never found a good contact there.

[1] https://goo.gl/maps/8CZsJZ6SPbA2

Who cares about how many people built it? What matters is the end product, which is something that existed in the 80s.

Because "minimizing" is an emotional notion, and it's irrelevant. But providing a response to over-enthusiastic reception is informative if only because it presents the other side of the issue.

In other words, I don't care if this guy is painted as a genius or a script kiddie. He's not relevant in my life, and I will forget about him a week later. However, the lessons about machine learning and engineering that I can find in this article are the reason I subscribe to HN (yes, I don't really know shit about these topics, and don't have enough time to fill gaps with real sources), and this comment is the most informative, just because it tries to cover what the article didn't.

"Built a self-driving car" sets the bar way too high for what this guy did.

I think it's fair given his mission to "crush Mobileye".

He built an impressive prototype, considering he hacked it together in a month.

This is just the start, not the end.

I'm afraid this may be also the end, more or less. From this point onwards, things get so much harder and more labor-intensive, that doing everything alone seems impossible.

I don't see why it would. Once you get enough base data you can start simulating the data from what you have, inputting different scenarios without actually encountering them IRL. Faking sensor input and randomizing should get it most of the way there.

When lives are involved handling edge cases is everything. The person stepping off a curb, the cyclist that falls in front of you, the car that weaves in its own lane and can't be used as a reference, traffic lights that are out of order, stop signs hidden by trees... and on and on. Mess one of these up while autonomous and severely injure someone and you're done.

Human drivers might only see one of these cases a month, or 6 months, but not driving over someone in that case is what is critical. Not saying it's an impossible task, but IMO it will require a lot more training data than humanly possible for one person to generate.

I have significant experience in faking sensor data (specifically radar), and can tell you from that experience that fake sensor data is terrible. There is way too much going on in the real world to accurately create sensor data without actually recording sensor data. That is, you can manufacture the situation for the sensor to capture much more effectively than you can manufacture the data from a model.

Even pseudo-faking like we were trying to do, wherein a generated signal is injected into actual, recorded background noise, is fraught with problems. Anybody who tries to develop a control system based solely on such data is in for a rude awakening when they try it for real for the first time.
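The pseudo-faking described above, a generated signal injected into recorded background noise, can be sketched in a few lines, and even the toy version shows the catch: the injected return is a clean spike, unlike anything a real target produces (smearing across range bins, scan-to-scan fluctuation, multipath). All values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for background noise recorded from a real sensor run.
recorded_noise = rng.normal(scale=0.5, size=1024)

def inject_target(noise, bin_idx, amplitude):
    """Add a synthetic point return at one range bin of a recorded trace.
    Real returns smear across bins and fluctuate scan to scan; this clean
    spike is precisely why such data misleads a control system."""
    trace = noise.copy()
    trace[bin_idx] += amplitude
    return trace

trace = inject_target(recorded_noise, bin_idx=300, amplitude=5.0)
print(int(np.argmax(trace)))   # the injected spike dominates its neighborhood
```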

Faking input is how most people test their buggy, crappy software. It rarely matches reality.

Ah the naivety. Just like 50s AI research :D

History keeps repeating

You're probably right (since pessimism and cynicism are pretty successful predictors whenever anyone is trying something bold), but we have nothing like enough information to know if he's done something revolutionary or not. As the article makes clear, he doesn't want to give too much away, so of course you're stuck with a vague summary which sounds like he's just done what any smart person skilled in the art would do.

General cynicism isn't really adding much to the conversation in my opinion, since almost everyone here probably knows this already, and too much cynicism can put people off starting projects and people starting projects is something we should cherish.

I do think your point about emergency situations is substantive, though. Perhaps he is only planning for self-driving while supervised by humans, but his idea for training as described (become an Uber driver) would not at all produce the kind of dataset that would assure me that I would be safe. I think a lot of training with advanced drivers in simulators, where you can have crazy life-threatening situations, would be the absolute minimum. I'd be worried that bad habits picked up on the thousands of Uber rides would kick in during an emergency rather than the couple of situations that would be feasible to train on in real life.

With neural nets, training the AI to handle emergencies will be all about exposing it to as many emergency situations as possible.

What's better about an AI powered by neural nets is that you could train an AI to go offroading.

Get enough data and you've got a model for dealing with a given situation. Google's biggest strides with OCR, Voice recognition, Spam filters and other AI tech early on came from its ability to gather a huge corpus of data.

The real challenge is twofold: gathering data, and feeding the AI the inputs that actually matter. This is the secret sauce that Hotz refers to in the article as the information he's not willing to disclose. That information will become commoditized in due time (like low-latency optimization for HFT), but it will take plenty of institutional money and experience (Google, Apple, Tesla, Ford, etc.) to get there.

Using neural nets to deal with emergencies runs you into the Anna Karenina problem - "All happy families are alike; each unhappy family is unhappy in its own way."

It's fairly easy to train and verify a system for driving in well-behaved traffic. Unfortunately, the problem space of not-well-behaved traffic is far wider, and it is very hard to gather enough data to train a system well.

What you're going to get is self-driving cars which handle 99% of driving just fine, and when they end up in emergencies, find the human 'driver' to have dozed off at the wheel. (All in all, their safety record might end up better than the status quo, but that's not a certainty.)

The trouble with using neural nets for safety-critical real-time systems is that it's really hard to do the necessary level of validation. You can't accurately predict how the system might react in totally novel or unexpected situations. Which isn't to say that human drivers handle those situations well, but most of the time they don't do something totally bizarre or dangerous.

Humans totally do things that are bizarre or dangerous when in shock, but we've come to accept that as a personal responsibility and a price the society has to bear.

We have millennia of experience in regards to estimating how people will react to various shock situations and what constitutes those situations. It's intuitive.

The reactions of NN to unusual stimuli are likely to be counter-intuitive at best and unpredictable at worst.

Because we can't re-engineer humans into rule-based automata. (And we probably shouldn't even if we could.)

Machines, on the other hand, are a different story.

Exactly. People may swerve and over-correct, causing their car to flip, for example.

Electronic stability control (standard in US passenger cars since 2012) has already mostly solved that problem. http://www.safercar.gov/Vehicle+Shoppers/Rollover/Electronic...

Human error when driving a vehicle is one of the top causes of premature death globally. That is what we should be measuring the technology against, not perfection.

It seems that the technology has already, or is very close to approaching human levels of proficiency on the road. If specific use cases (offroad, snowpacked road etc.) are problematic, they can be limited or prohibited in the mean time.

Doesn't it seem possible that we could start testing the cars in a simulated environment?

Simulated environments aren't accurate enough (inputs are too clean, other drivers don't act realistically, etc.) and would end up training the software to do the wrong things. A more reasonable approach would be to record the activities of multiple safe human drivers across a wide range of situations and then train the software to act like them.
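One nice property of the record-real-drivers approach is that it also gives you a cheap offline test: replay logged observations through the controller and measure how far its commands deviate from what the human actually did, with zero on-road risk. A minimal sketch (the controllers and log format here are invented):

```python
import numpy as np

def replay_score(controller, logged_obs, logged_cmd):
    """Mean absolute deviation between the controller's commands and the
    recorded human's commands over one logged drive."""
    predicted = np.array([controller(o) for o in logged_obs])
    return float(np.mean(np.abs(predicted - logged_cmd)))

# Toy log: the human steers proportionally to lane offset.
obs = np.linspace(-1.0, 1.0, 50)
human_cmd = -2.0 * obs

matching = lambda o: -2.0 * o        # mimics the human exactly
biased = lambda o: -2.0 * o + 0.3    # constant steering bias

print(replay_score(matching, obs, human_cmd))   # 0.0
print(replay_score(biased, obs, human_cmd))     # about 0.3
```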

He said for testing, not for training the neural network. But just seeing how it behaves in various situations, to find its flaws, and see if it's ready for the road.

Sure, but the real world is 100x more complex than simulation.

Have you built deep learning models before? Neural nets are not magical boxes that you stick data into and instantly get great, generalized, robust models at the other end.

I have no inside knowledge, but I would be very surprised if Google's self-driving cars use neural nets as a significant component right now (which isn't to say there aren't people exploring its use).

>He has implemented the happy case.

I could probably do this using ROS, OpenCV, and PCL, at least on a level where the car could recognize the road well enough to drive on it. But I imagine both my car and his car are nothing that any sane human would want to sit in. That last 20%, focusing on safety and edge cases, is going to be 100x the work/innovation/testing/staff/code/talent/smarts here.

As a side note, I am intrigued by the idea of a FOSS self-driving car. It's a little worrisome that we'll never see the code Tesla, Mercedes, etc. are using.

I don't really see your complaint here. He did build A self-driving car, not THE self-driving car. It's an impressive hack, as you noted; maybe it can turn into something bigger with more time and energy. So what is the point of shouting this down with an "IT'S INCOMPLETE"? It isn't as if this is a Kickstarter promotion or even a product. Geez. Get back to hacking.

It's more of a lane-assist in good conditions, but not expected to navigate city streets or handle unexpected conditions.

So yes, it's an amazing project for a single person, but it's not really a self-driving car.

>Prototypical case of the 80/20 rule.

Could you explain on what basis you claim this? Do you have intimate knowledge of his prototype, the amount of work he put in, or the novel ideas that he brought to the project in addition to integrating pre-existing tech?

From what I can understand, your argument seems to be "lets see if I can guess what he did". If you're an authority in this field, then your guess could be very accurate I suppose.

That's the feeling I got from the video too. Maybe he tried too hard to make it appear as 'this is not as hard as the big corps say it is!', but it also felt like 'hey, ML + basic CAN controls = self-driving!'. Then I disagree. I want a computer with some general knowledge of physics + ML, not just abstracted driver patterns from self-play.

“We’ve figured out how to phrase the driving problem in ways compatible with deep learning,” Hotz says.

OK, maybe it's BS, but he's not saying what you say he's saying.

Why is deep learning this magic pixie dust you sprinkle on anything and it works? Have the people who are suggesting this actually gotten deep reinforcement learning to work on complex, long-time-frame, real-world continuous control problems before?

I would be very surprised if you got deep reinforcement learning to perform well on a self-driving task, even on a highway. If you did, well, your faculty position at Stanford is waiting for you.

They're very powerful classifiers, and from the outside it seems they can learn to distinguish between arbitrarily complex states.

More states? Add neurons! More search space? Add layers!

Except they have unpredictable resonances, especially multi-layer networks:


They're just starting to understand this, but I believe the myth of the 'do-it-all DNN' is going to die. It's time to start thinking about clusters of independent neural networks, each supervising an independent aspect of the search space and/or each other.
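The "clusters of independent networks supervising each other" idea is close to an ensemble with a disagreement check: train several members independently and treat high variance among their outputs as "I don't know, hand back control." A toy sketch with linear models standing in for the networks (all data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data covers only a narrow slice of the input space.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.05, size=200)

# Independently trained "cluster" members (bootstrap resampling).
members = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    members.append(w)

def predict_with_dissent(x):
    """Return ensemble mean and std; std measures member disagreement."""
    outs = np.array([x @ w for w in members])
    return outs.mean(), outs.std()

_, seen_like = predict_with_dissent(np.array([0.3, -0.2]))   # near training data
_, far_out = predict_with_dissent(np.array([50.0, -80.0]))   # far outside it
print(seen_like < far_out)  # members disagree far more off-distribution
```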

It's pretty cool that it can be done on the cheap, though. I imagine a lot of people would be willing to pay a couple hundred dollars to retrofit their car to get autosteer alongside their cruise control feature.

Small personal example: my family lives out in the suburbs. My dad works in a neighboring city. His commute is about a half hour, 20 minutes of which is a straight shot on a major highway. I'm sure he'd be willing to pay a few hundred to reclaim that 20 minutes each way to read a book/the newspaper, check his email, browse the internet, etc.

It'd also be good for road trips.

I certainly wouldn't want to add amateur autosteer to my car, or accept the responsibility that comes from hacking my own self-driving car. The big manufacturers will accept liability for their systems -- build your own (or hack a factory system), and you're on your own, personal auto insurance may not even cover you since you weren't driving.

On the "cheap" relatively. The sensor he uses on the top of the car alone costs $8000. If you want to do it right, you'd also need a really nice IMU system to... I'm not sure what he's using but they can get very pricey.

Do you have a link for the best (commercial) IMUs around and how much they cost? I'm curious -- are they just clusters of MEMS like the ones in a phone, or something more advanced, like interferometry-based?

Define "best". We've used a quarter million dollar one at my current company, and at a previous job we spent far, far more than that for military airframes.

Groves "Principles of GNSS, Inertial, and Multisensor Navigation Systems" contains a good description of the various technologies used and their accuracy limits for navigation.

Consumer-grade MEMS are fine for airbags or the pedometer in your phone, but they are not sufficient for inertial navigation, even when aided by other sources. In the $2K-$30K range you get systems that can provide accurate navigation for up to 2 minutes or so. They are used in things like missiles.

Aviation-grade IMUs need to meet the SNU 84 standard, which requires a maximum horizontal position drift of 1.5 km in the first hour of operation. These will run $100K and up. Marine-grade units (subs, rockets, ICBMs) run $1 million and up, and have a maximum drift of 1.8 km per day.

None of them are good enough for autonomous cars w/o sensor fusion.
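As a back-of-envelope illustration of why grade matters: a constant, uncompensated accelerometer bias b double-integrates into a position error of 0.5*b*t^2. The bias figure below is a hypothetical consumer-MEMS value chosen for illustration, not a quote from any datasheet:

```python
def drift_from_accel_bias(bias_mps2, seconds):
    """Horizontal position error from a constant, uncompensated
    accelerometer bias, double-integrated: p(t) = 0.5 * b * t**2.

    Back-of-envelope only: real INS error growth also involves gyro
    drift, gravity modelling errors, and Schuler oscillation.
    """
    return 0.5 * bias_mps2 * seconds ** 2

# Hypothetical consumer-grade MEMS bias of ~1 mg (about 0.0098 m/s^2):
one_mg = 9.81e-3
print(drift_from_accel_bias(one_mg, 120))   # ~70 m after 2 minutes
print(drift_from_accel_bias(one_mg, 3600))  # tens of km after an hour
```

The quadratic growth is why even "accurate for 2 minutes" is a meaningful spec, and why unaided inertial navigation over an hour demands the expensive grades above.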

I haven't paid attention to IMUs in a while. Here is one that I had some experience with in grad school:


Even when the technology for self-driving cars is developed well enough for widespread use, it won't be practical or cost-effective to retrofit existing vehicles. By the time you strip everything down, cut holes, install sensors, run cables, etc it will be cheaper to just buy a new car.

I don't know if I'd be comfortable completely giving up control to a driving computer, even on a straight highway.

I am totally for a computer telling me when I'm driving dangerously because I'm distracted or tired.

Throughout history there are many cases of the lone tinkerer who achieves the breakthrough going up against much better funded adversaries.

Take the case of the Wright Brothers who faced two well funded adversaries. Samuel Pierpont Langley had a chair at Harvard, worked at the Smithsonian and had among others funding him $50K from the US War Department. Alexander Graham Bell, the inventor of the telephone, was an avid aviation enthusiast and an already wealthy man. One of Bell's assistants was Glenn Curtiss who went on to found his own plane company.

Who would bet on two bicycle mechanics from Dayton, Ohio? No one, yet they were the first to fly.

The first popular microcomputer would surely come from IBM or HP yet it didn't. Two guys in a Cupertino garage built it and neither of them was a college graduate.

This guy may fail but I am not going to bet against him. In fact I hope they televise the race between the Comma and the Tesla. I'll bring the popcorn.

Tracing the Wrights' development process, it's the first example I know of a directed research and development program. The Wrights formulated a clear goal, identified the problems needing solutions, developed a series of prototypes aimed at proving each solution, did laboratory experiments to resolve others, invented physical theories to resolve still more, carefully documented their progress, etc.

I.e. they did much more than simply throw some ideas and parts together and see what stuck, like every other contemporary experimenter.

I agree, but the Wrights went counter to common thought at the time. It's like Peter Thiel's favorite question: what do you believe that few others do?

Ever since I played around with Prolog in the nineties I came to believe, just as digital eventually triumphed over analog, that neural networks will eventually triumph over software based on rule sets. I did not know when it would become apparent, but I firmly believe that it is coming.

Great observations and references @jpfr.

Learning for the AI does not have to come from real-world experiences only; simulated/controlled emergency situations would help as well! Further, even if the 2K lines of code stretch a bit to deal with unknown situations, that isn't so bad either.

But this is a fundamental problem: The learning approach might need 100s of examples of drivers reacting to a bicycle on a sidewalk while turning right into a parking lot to get the right training input. Or perhaps it can learn from examples of bicycles and sidewalks and driveways to do the right thing. The point is, there are millions of edge cases, so getting examples of them all for training or verification is a very large task. The alternative is to build a more general world model where it's possible to work from the other direction and gain confidence that yes, the car senses all other obstacles correctly, and yes, it has algorithms that attempt to eliminate collisions in any circumstances. That's a fairly different approach, which ends up being much heavier in terms of effort and investment.

Make the AI play grand theft auto for many thousands of hours.

I would argue that you're half there. You'd have a car that could navigate roads, but ultimately it couldn't get you to where you want to go.

HERE has been dedicated to making maps at the quality level needed for autonomous cars. A few issues with the currently available data: it isn't very detailed, and you're at the mercy of either volunteers (TIGER (old), OpenStreetMap data) or a company whose main focus isn't maps (Google).


I agree this is 80/20 complete at the moment, but the gripes you have are not insurmountable if his model can truly learn with proper inputs.

What if you could simulate these conditions in a safe/controlled environment, and remove the driver from harm via remote control? Maybe build a virtual world that simulates the inputs as best as possible. That would be the cheap way, although you may lose fidelity.

If you had enough money you could build a simulated town/city, similar to a movie set, that throws all possible dangerous scenarios at you and operate the car remotely through these scenarios.

Path planning shouldn't require a ton of lines of code, really. I've seen in-use path planning and localization in the sub-2K LOC range.

In 2007, for the DARPA Urban Challenge, the Ben Franklin Racing Team used Matlab for their car. The entire thing ran on 5,000 lines of code, compared to similarly performing cars written in C/C++ which used over 100K lines of code.


Well that makes sense, given that basically all machine learning is transformations over matrices and that is Matlab's bread and butter. The equivalent C code might perform better when optimized, but it is going to be far longer and uglier. There's a reason a lot of ML work is prototyped out in Matlab first.

I would say basically all of robotics is transformations over matrices. As for Little Ben, there was actually no machine learning involved. Planning was sample-based on an occupancy grid. Localization was map-based.
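As an illustration of why grid-based planners can be so short: a complete breadth-first planner over an occupancy grid fits in a couple dozen lines. This is a toy sketch, not Little Ben's actual sample-based planner, but the core idea is the same: plan only through cells known to be free.

```python
from collections import deque

def plan(grid, start, goal):
    """Shortest 4-connected path over a 2D occupancy grid
    (0 = free cell, 1 = occupied). Returns a list of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))  # routes around the occupied row
```

A real planner layers kinematic constraints and costs on top, but the combinatorial search itself stays small.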

That is true, and was very impressive. Consider how expressive Matlab is though - x = A\b is one line in a .m file, but can correspond to several hundred lines of Fortran.
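For readers outside Matlab, here is the same one-liner sketched in numpy (illustrative only; Matlab's backslash additionally dispatches on matrix shape and conditioning, which this does not reproduce):

```python
import numpy as np

# Matlab's `x = A\b` for a square, nonsingular A corresponds to
# solving the linear system A x = b directly (NOT computing
# inv(A) @ b, which is slower and less numerically stable):
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]

# For rectangular A, Matlab's backslash falls back to a least-squares
# solution; the numpy analogue is np.linalg.lstsq(A, b, rcond=None).
```

Either way, the "several hundred lines of Fortran" live inside LAPACK, which both environments call under the hood.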

Why take someone down when what they've done is awesome?! Just suggest improvements, be constructive.

I know it's far from production ready. But he's demonstrated an ability to put together quite a diverse collection of hardware and software and get it all to work together. I'm impressed.

I think most people would agree it's one thing to understand how the pieces work together "in theory", and an entirely different level actually building a functional prototype.

Completely agree. There's absolutely no way you could use this approach for level 4 autonomy. Level 3, fine, but not level 4.

In the video, he says his company is only targeting Level 3.

Ah, I didn't see that. Fair enough. I still agree with the parent comment that the (important) edge cases, like avoiding a crash, potentially will not be handled well.

Can you use the CAN bus to actuate?

Doing complicated stuff with closed hardware is very impressive. Controlling hardware is hard.

I'm not confident we can argue Google or anyone else has done much better. You might notice Google has never announced testing their cars where snow occurs, for example.

I think it's likely that much more of self-driving car development is smoke and mirrors than people realize. Best case scenarios are promoted as examples of how innovative a company is. Great PR, not necessarily a practical result.

>I'm not confident we can argue Google or anyone else has done much better. You might notice Google has never announced testing their cars where snow occurs, for example.

This is a sensor limitation. They have fully admitted this several times (heavy rain too). Equating the fact that this guy can't handle any emergency situations with a sensor limitation all of the lidar systems suffer from is stupid.

Google has shown many times that they have logic to handle routes around obstructions, construction, etc as well as cars running red lights, pedestrians walking into the street, etc. At least read up on something before you call it smoke and mirrors.

I actually have read up on it before I called it smoke and mirrors. While this is a year ago now, this is well after their cars were heavily marketed as being pretty sufficient and capable of detecting problems. ...Yet it couldn't detect the existence of a stoplight if it wasn't explicitly mapped ahead of time. And apparently, according to a Googler, the mapping required to make a road work with Google's self-driving car system is "impractical" to do at a nationwide scale.


I'm not saying it won't ever happen, I'm not saying there haven't been developments in the technology. But people seem to have a disconnect in expectations of where the technology is, and where marketing departments for these tech companies want you to believe the technology is.

But they did solve the problem with mapping. I wouldn't call that smoke and mirrors. The implementation is difficult, but at least it's a solution.

2) It's probably not a disk. I'd bet it's an SSD, or something. I doubt he'd use an HDD considering how relatively cheap it is now... and considering how loaded he probably is.

SSD can also stand for Solid State Disk. Disk does not imply moving parts.

No. It's only referred to that way because people are stupid. It contains neither an actual disk nor a drive motor to spin a disk.

If SSD = Solid state disk, then HDD = Hard disk disk.

But it does imply a disc, as in a circular thing. SSDs are very square looking discs...

It's kind of like apps still using the floppy disk icon to indicate the Save action.

SSD = solid state DISK

Actually no. It's only referred to that way because people are stupid. It contains neither an actual disk nor a drive motor to spin a disk.



I thought it was drive, not disk.

"His self-funded experiment could end with Hotz humbly going back to knock on Google’s door for a job."

The biggest thing here IMO is that this is self-funded. Any startup trying to do what he is doing in this environment would have raised $50 million, hired hundreds of engineers from top-notch schools, been accepted into YC, and had Marc Andreessen, Paul Graham, Sam Altman and the rest singing their praises.

Kudos to him for being self-funded.

Could not help thinking about the stark contrast between Hotz and the Theranos "entrepreneur": a) self-funded vs. VC-friend-funded; b) demoing the product (try it and 'feel' it) early on vs. hiding behind a ton of marketing legalese.

The funny thing is he's the type of person you'd want to put your VC behind.

And yes, that happened http://www.getcruise.com/

Ta - I was trying to remember which YC startup was trying the same stuff. https://news.ycombinator.com/item?id=7933045

Seems they recently raised $15m http://techcrunch.com/2015/09/18/cruise-2/

Wonder how they compare tech-wise to Geohot's thing.

What's with making text so thin you can't even read it?

I hope the design was originally done with a different image, and then the image was changed without a redesign.

The text that isn't overlaying images is terrible too. It's too thin for subpixel rendering to look decent. There's not enough contrast for viewing on a TN LCD panel unless it's in the middle of the screen.

p { font-weight: 100; }

Oi. This is not the kind of thing I want kickstarted.

I'd prefer my autonomous cars to have gone through insane amounts of testing, regulation, etc. This is just too new of a field, and the amount of edge cases you have to handle is practically infinite.

While I understand where you're coming from, and even feel emotionally invested in the idea of bootstrapping, objectively speaking, it's a bad decision to stay self-funded. It is, after all, a business, and if you can accelerate your business' growth 100x by taking on some very smart outside investors and hire very smart people, why wouldn't you?

You might not because the goals of a founder and an investor are different.

Investors know that their returns are generated by a handful of super-successful companies. And so they have a natural pressure to "swing for the fences".

Founders have a tremendous amount tied up in THIS company, and are naturally risk-averse.

So you get conflicts like the following: there is an initiative which has a 20% chance of losing everything, but could double how much you make. Investors will always want to go for it. Founders reasonably may not.

A typical woodhead's thought. "Accelerate your business's growth". Hahaha. Hard things have to be done solo because explaining to others is slowwwwwwww.

Hard things have to be done solo because explaining to others is slowwwwwwww.

A million times this. I never really understood how hard it was to explain an (in my mind) simple new technology to the layperson until I had to do it. This is even after spending years as a technical briefer for high-powered executives.

What I meant is actually not about external investors. My point is that sometimes even adding equally competent technical collaborators won't work; it's like digging a tunnel: the working face is only so wide, and an extra worker can do little more than stare at the working man's ass.

Because if all of that will distract you from actually developing the product. Granted this won't work for most people, but if you're extremely talented like geohot then it may not be a bad call.

Because creating a self-driving car is an extremely creativity-intensive exercise that demands "smartness"... but smartness doesn't add linearly (or, I would posit, even monotonically). If 1 smart guy can produce 1 self-driving car in, say, 6 months, it doesn't mean 2 smart guys can produce a self-driving car in 3 months. Once you have a bunch of people, second- and third-order interactions between them get complicated, and managing that becomes its own time/money sink.

As for money, yes, it can accelerate growth in its first-order effect; but it also induces stress and so threatens early exhaustion of your other precious resource: personal motivation.

So, as a crack-shot programmer, if you know with 90% certainty you can crank out a self-driving car in 6 months by yourself or fail, but only 20% certainty you can arrange a cohesive team with someone else's money to crank out a car in 1 month or fail (and alienate your team, and ruin your credit)... I would advise taking the 6-month route. Patience is a virtue, and sometimes it's better not to buy into every pot of snake oil the SV hype machine wants to sell us.

Creating 1 job is better than hundreds?

Well, Hotz did state that, “The truth is that work as we know it in its modern form has not been around that long, and I kind of want to use AI to abolish it. I want to take everyone’s jobs. Most people would be happy with that, especially the ones who don’t like their jobs. Let’s free them of mental tedium and push that to machines. In the next 10 years, you’ll see a big segment of the human labor force fall away. In 25 years, AI will be able to do almost everything a human can do. The last people with jobs will be AI programmers.”

Yeah, and the world will split into rich and poor, with the poor starving.

What interests me about your argument is the assumption that the "poor starving" will just sit by and passively accept that.

The reason we don't have an insurrection on our hands now over wealth disparity is that while the wealth of the super wealthy has accelerated hugely, so has the general living standard of the poor. If (when) the jobs go away, that will no longer be the case, and then you are talking about a brutal escalation into a full insurrection. And while the technology and wealth will be on one side, the last 15 years in the Middle East have shown what committed people with pickups and AKs can do against an on-paper massively superior opponent.

I just hope the super wealthy are smart enough to see this coming and avoid it, it would be spectacularly brutal.

Or nobody will ever have to work again.


It's a nice dream, but the idea of AI and robots doing dishes, picking strawberries, washing cars, cooking meals will never happen.

The best AI cannot beat a population of Mexicans who are basically the glue that holds our modern society together.

If you wanted to see how the U.S. will completely come to a screeching halt, it would be if the rapture took place and only claimed all Mexicans.

Our entire way of life depends on them. AI will never replace them.

Once, our entire agricultural system (here in the UK) was dependent on manual farm labourers; now we grow 60% of the calories we consume with 1.6% of the workforce.

> It's a nice dream, but the idea of AI and robots doing dishes, picking strawberries, washing cars, cooking meals will never happen.

If something can be automated at a lower cost than paying wages, it eventually will be. Automation is coming (arguably it has been here since the industrial revolution) and it hasn't stopped yet.


Watch this - and tell me what's cheaper, robots or Mexican slaves.

In a word, yes.

"Jobs" are not an end in themselves, and are decreasingly relevant in the information age.

Is this less impressive to you because he didn't 'create jobs'?

Self-funding this experiment is probably harder than creating 100 jobs.

I think his/her point is that just because the usual suspects aren't backing this venture, there's a lot of negativity about the project here on HN.

Like Palmer Luckey of Oculus VR, I hope G Hotz has a similar story to tell at the end of it all.

> “I understand the state-of-the-art papers,” he says. “The math is simple. For the first time in my life, I’m like, ‘I know everything there is to know.’ ”

Yep, he's still in his twenties.

But that belief is enough to attempt something that more experienced people would hesitate to start.

Naivety is a very good thing at times.

I've seen average people achieve incredible things, and not because what they did was incredible... but just because they started work on things that no-one else thought they could complete. Some way into it, when enough progress has been made, people have rushed to give support because "halfway there but badly done" is a hell of a lot better than "not even started yet".

I don't disagree with that. I've worked with some very smart people in my 20s who sounded similar to Hotz -- enthusiastic, retrospectively naive about their understanding of a field, but above all, superbly intelligent. They did really great things, things that maybe didn't work perfectly or as envisioned, but still things that might have scared off more experienced folks.

But also now that I am in my 30s, and they are as well, we frequently look back at that time and laugh about being that young. "Man, you were fun to work with, but also what were we thinking"

So I definitely wish Hotz all the luck. If nothing else, the more smart people working on the problem of self driving cars, the better.

My comment mostly stemmed from amusement of his quotes.

There have been people in my past that wanted to start a project that I didn't think they were capable of finishing, because either it was too large, they didn't have the skills/smarts (not that I thought they were stupid, just that I thought it would take exceptional intelligence), or both. A few of them succeeded, either in the original task, or the effort and journey was well worth the price paid.

Part of this was hubris. The thought of someone I considered less capable than myself accomplishing something I felt I could not damaged my ego. This was humbling.

Part of this was experience. The experience to know that attempting the hard or impossible is sometimes worth the effort, whether you succeed or not. This was educational.

Part of this was ambition. Ambition to do something new, to ignore the naysayers and noways when needed, and forge your own path, which I've always felt short on, but have steadily worked on over time. This is ongoing.

Another part of my problem is that I have too many projects I want to do. Learning about AI is one example, but I've instead done a series of web and mobile apps which are much closer to success. It would take a lot of time to read all the AI research and become good enough to tackle a problem like self-driving cars, and I've only got my spare time at home, with which I must also make sure my wife remains happy (ignoring her seems to make her unhappy for some reason) and keep my sanity (read fiction or play a video game some times) and take care of my house (the lawn just won't stay mowed).

I do remember being about 19 and thinking I was the best programmer in the world. By about 22 I had rewritten as much of my old code as I possibly could because it was so horrible. Somewhere between there and now I've gained a cynical bit of humility to temper my ego. I think the cynical part is that my ambition has not lessened, just my belief that I can succeed.

One Steve Jobs philosophy is focus and say no. I'm guessing I could do better if I said no to all but a single project.

A friend of mine in college had a very good saying about this that I always keep in mind:

"There's nothing like succeeding at something you weren't even qualified to attempt."

When you fail at something you're not qualified for, it doesn't feel like failure. You're able to get right back up without even bruising your ego.

Thanks for sharing.

I like this.

Sure. But that belief is a genuine worry when you are talking about creating something that'll move a hulk of metal down a road at 70mph.

If we don't have laws covering this currently, we sure as hell need them. I can't imagine letting everybody try their own self-driving software.

Millions of drunk people drive on the roads. People can buy assault weapons, with no training and background checks. People who make self-driving software should be the least of our concerns.

There are laws against drunk driving (and harsh penalties for those that are caught), and you can't buy a firearm without a background check from a dealer (with more states requiring gun show dealers perform background checks now, too).

People building untested self driving cars is an entirely legitimate concern.

There aren't laws against driving while teenage:

"Nationally, 963,000 teen drivers were involved in police-reported motor vehicle crashes in 2013, which resulted in 383,000 injuries and 2,865 deaths"

I'd worry about that more than the odd geek with a laptop.

Legitimate: yes. Worthy of concern: absolutely not.

I don't think this would help at all because 1) most people are not interested in making their own self-driving car and 2) the small niche who is interested isn't going to worry about following the laws, as Hotz states in the piece.

From the article:

“I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”

Indeed, I would have a word with whoever made the 2nd law of thermodynamics.

This is just nitpicking. He is clearly talking about social laws, not fundamental laws of nature which are clearly different in scope and application.

So he'll feel morally okay if he kills or cripples someone by being reckless?

I wonder where he believes laws came from...

From assholes. i.e. other people.

"Assholes", it would appear.

This was my first thought when I read the article. There ought to be some test track qualification before allowing a new system to be tested on the public road.

We already have enough laws. Responsibility doesn't change because a computer does some thinking for you. Besides, how many people are actually writing their own self-driving software?

>Naivety is a very good thing at times.

That's what I think 'foolish' means in "Stay hungry, stay foolish."

"Naivety is a very good thing at times."

He might not be naive.

>>I've seen average people achieve incredible things

In my opinion, we must never underestimate people.

>>but just because they started work on things that no-one else thought they could complete.

Nothing fails like smartness. The reason why a few people achieve the impossible while far more intelligent and smart people don't is that the curse of intelligence makes them believe certain things are impossible.

The fool didn't know it was impossible, so he did it.

I would totally agree with you IF this kid hadn't proven his chops with iPhone and PS3 hacks, not to mention building a self-driving car in his garage.

I also realize this kid probably won't end up making a huge dent in the universe.... but.... statistically speaking, there should be several "Leonardo da Vinci"- level humans alive right now. Why not this kid?

> I would totally agree with you IF this kid hadn't proven his chops with iPhone and PS3 hacks, not to mention building a self-driving car in his garage.

Impressive as they are, his chops still don't support his claim to "know everything there is to know". The Dunning-Kruger effect is in full swing.

Sure, but that claim was made in the context of deep-learning networks. He went to work for an AI company, and realized that he knew -- from reading cutting-edge academic papers -- as much as the forefront of the field. He wasn't claiming to know everything there is to know in general, or even in software development, just that he can understand and implement machine learning with the best of them. Personally, I don't doubt that claim.

The field doesn't require a particularly extensive background either. A good grasp of linear algebra and multi-variable calculus basically has you set to understanding even the state of the art in the field. Of course, coming up with the papers would require a whole lot more work.

"I know everything there is to know."

I kind of took that to be like how Musk talks about needing to know first principles. In the article you can see that he was humble about what he thought he knew, took jobs here and there and eventually confirmed that he was at the cutting edge, that he knew 'everything there is to know' about this special area.

That's when he realized that he was qualified to try this. IMO, anyway ;)

On a smaller scale, I remember one day realizing that I, a self-taught programmer, knew more than my boss. Within six months I left to start my own company.

Nice. I'm still struggling to feel I know enough to do my own thing other than lead gen and optimization for others.

How smart was your boss, and how did it go with your company?

The boss was pretty smart in the sense of knowing how to work with big corporations to build large decision support systems. But his technical knowledge was fairly shallow.

I sold my first company and the investors did very well, but I made tons of stupid mistakes in the process. Not least of which was holding on to dotcom stock that I thought would go to the moon but which mostly went down the drain.

That statement is clearly tongue-in-cheek, come on. "I well-understand the cutting edge of this narrow research area" is less fun to say, but that's the meaning.

Typical twenty year olds don't read all the state-of-the-art papers on a subject before saying they know it all. It sounds more like he's caught up on the latest AI research and fundamentals.

If he really did read the papers then it's clear he would not say this. The papers aren't an end; they describe incremental progress. Having just returned from NIPS, where most of the researchers say "we don't know" all the time, I find it ironic.

Precisely. We have some stuff that works and we don't know why.

People in their twenties wrote those papers.

The math in any of the papers he's most likely referring to isn't some theoretical PDE math or abstract algebraic geometry stuff... it's pretty understandable if you can grasp a "graduate-level" linear algebra course.

That's true. My point was to respond to the breathless reporting about that this guy has achieved so young. He's replicating the work of other mostly young people. Doing it the first time is the trick.

(I'm very familiar with this literature - see my username)

To be fair, this is machine learning we are talking about, not algebraic topology. The experts in ML are still proud of the fact that they figured out the chain rule...

> The experts in ML are still proud of the fact that they figured out the chain rule...

I assume you're talking about using backpropagation with gradient descent. Backpropagation itself isn't all that interesting. The interesting part is that it works for practical problems and doesn't get stuck in shallow local minima.

Never mind that they have no idea of the behavior of the partial derivatives, nor attempt to model it, when presenting their "latest and greatest" -- at least in most of the stuff I've read that's been posted here.

It's so much fun to see people who got through their 20s doing nothing close to great get so jealous.

I cringed. Saying "the math is simple" -- ouch. The writer must have been barely able to suppress his glee when that one popped out.

I don't doubt it at all. Keep in mind that "simple" is relative; we have to ask "simple compared to what?" For lots of people, I bet the math involved in these neural nets is the most complicated math they've ever done. They would never say it is simple, because they themselves barely grasp it. But in my experience, topics in mathematics have a funny way of becoming very simple the moment you "graduate" to thinking about a slightly more general mathematical framework.

Someone who has digested enough of the AI literature to think about the methods in aggregate is very likely to be in a position to see any particular method as a "simple" implementation of some more general set of principles.

As a general observation, what you say has some truth to it.

But the particular quote is referring to learning rates in autonomous robotics, especially visual classification in complex real-world scenes.

I have worked and published in ML since the early 1990s, was a program chair for the learning track at NIPS one year, participated in the same DARPA learning-to-drive program that Yann LeCun did, and don't consider the math behind "state-of-the-art papers" to be simple.

Just taking deep learning: there are a lot of tricks and recipes (e.g., rectified-linear activations, number of layers, staged training) that are not mathematically understood. It's exciting, but mathematically still a jungle. Just because a neophyte can code and optimize a network does not mean that the math that explains why it actually works is simple. As engineers, we need to understand why it works before using it in a safety-critical situation.

While it's a good point that simple is relative, if you look specifically at deep neural networks, we don't understand why training a non-convex function with gradient descent converges - the fundamental step in creating a usable model. In practice it often works, and there are a few intuitions for why, but it's naive at best to claim that this is simple. If it were simple, we would understand it better :)
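To make the non-convexity point concrete, here is a toy illustration: even in one dimension, which minimum gradient descent finds depends entirely on where you start. The function and learning rate are arbitrary choices for illustration, not anything from a real network:

```python
# Gradient descent on the non-convex f(x) = x^4 - 3x^2 + x, which has
# two local minima (near x ~ -1.30 and x ~ 1.13). Starting on different
# sides of the hump, plain gradient descent lands in different minima.

def grad(x):
    # f'(x) = 4x^3 - 6x + 1
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # rolls into the deeper minimum near -1.30
right = descend(+2.0)  # rolls into the shallower minimum near 1.13
print(round(left, 2), round(right, 2))
```

In millions of dimensions the picture is far murkier, which is exactly the gap between "the update rule is simple" and "we understand why it works".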

While I don't know a lot about the subject, I would bet that's likely right. As in, there are very hard problems, but the actual mathematics are not all that hard.

The math to implement a working neural net is indeed simple. Even if you consider all the commonly used engineering practices to ensure its correctness and improve its accuracy (like dealing with under/over-fitting), it's still not that hard. In the end, it's just doing multiplications over matrices, calculating derivatives and propagating values back and forth.

Now, understanding WHY the algorithms work, and give you the results they claim to calculate, is quite hard, but that understanding is not required to implement them.
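To back up the "it's just multiplications and derivatives" claim, here is a minimal sketch of a two-layer net trained with backpropagation on XOR, in pure Python. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices:

```python
import math
import random

# Two-layer net on XOR: the forward pass is a couple of matrix
# multiplies through a sigmoid; the backward pass is the chain rule
# applied layer by layer, outermost first.
random.seed(0)
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
Y = [0.0, 1.0, 1.0, 0.0]
H = 8  # hidden units
W1 = [[random.gauss(0, 1) for _ in range(H)] for _ in range(2)]
b1 = [0.0] * H
W2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

losses = []
for _ in range(5000):
    loss = 0.0
    for x, y in zip(X, Y):
        h = [sig(x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j]) for j in range(H)]
        out = sig(sum(h[j] * W2[j] for j in range(H)) + b2)
        loss += (out - y) ** 2
        # Chain rule through the squared error:
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])  # uses pre-update W2
            W2[j] -= 0.5 * d_out * h[j]
            W1[0][j] -= 0.5 * d_h * x[0]
            W1[1][j] -= 0.5 * d_h * x[1]
            b1[j] -= 0.5 * d_h
        b2 -= 0.5 * d_out
    losses.append(loss / 4)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Which is the point: the mechanics fit on a page, while the theory of why this optimization behaves well does not.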

Well, a lot of people on HN certainly comment on technology with immodesty and an air of authority. It does feel boorish to have someone say it out loud, though.

I'd typically tend to agree with you (that a guy in his 20's saying he knows everything is ludicrous), but geohot is an actual savant.

I take that to mean that for the first time in his life he read the papers and fully understood them without needing additional background, not that there isn't more to learn outside of those papers.

He may well know everything there is to know today, but there are bound to be plenty more breakthroughs in AI research. It would be like Newton saying "I know everything there is to know about physics today".

Nobody created anything great by first fully appreciating the size and difficulty of their endeavor. I would say, underestimating a problem, and overestimating one's skills are crucial to innovation and progress.

Or just legitimately an expert in the field.

> I know everything there is to know.

If only he knew about the Dunning-Kruger effect...

I wish I could upvote this more. A person in their 20's knows nothing, but thinks they've outsmarted the world. It's not until your 30's that you realize how big of an idiot you were/are and how much of the world you actually understand (read: little).

This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment needs to stop. I agree that many younger people think they understand more than they do but that's just part of growing up and we all go through it.

You are misunderstanding the sentiment. No one is saying 30 year olds know everything, they are saying 30 year olds _realize_ they don't know everything.

Or as Socrates put it, "the only true wisdom is in knowing you know nothing"

(probably one of the more Buddhist-ish gems from Western philosophy)

Why does it need to stop if you agree that it's a fact of life? No one is saying 30 y/o's are enlightened, just that they have a bit more perspective. The same can be said for twenties vs. teens. It's not that teens are idiots; they are just teens, with the life experiences and perspective of a teen.


> A person in their 20's knows nothing, but thinks they've outsmarted the world...

Is a dangerous and gross generalization. I totally agree with the changing of perspectives point, but feel that this community has a very clear bias from the older gen (30s and up) against the younger gen (teens and twenties). That's all I'm saying. It's divisive. Instead of saying they "know nothing", it should be phrased, "still have a lot to learn."

That is just a semantics argument.

The semantics are relevant though, as they highlight the sentiment I'm trying to shed light on in my original comment.

I don't believe that's accurate though; saying people in their 20's know nothing, or that people in their 20's still have a lot to learn, is just a way of restating the same fact. But that statement doesn't mean that people in their 30's are enlightened or smarter, only that they now understand how much they don't know.

> This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment


Age is shorthand for "I've spent X years making a lot of mistakes and learning from them."

Watch Geohot do a CTF live: https://m.youtube.com/watch?v=aZJM-iIpbqc

I think the point you are making is generally valid... But he is a savant. I don't think it is wise to apply generalities to him.

Yes, age will change some of his sharper edges, but he is already pretty unusual.

I took his meaning to be that, with all this stuff about technology and AI, he's at a point where he feels he can start innovating because he's learned enough (the article says he went back to school to get his PhD and worked at an AI company before quitting to work on the car), hence why he feels so sure that his technology is better than Mobileye's.

I don't think this has anything to do with saying he knows everything he needs to know in the world.

I'm not disagreeing, but it's in bad taste to make a personal judgement on someone without meeting them personally. The guy could be cocky, or the writer of the article could've just made the guy appear cocky.

Anyway, wouldn't you agree that it is better to be empathetic rather than thinking you're an idiot?

...said a guy in his 30's

Busted. Which is why I didn't say shit about what it's like to be in your 40's because ... I have no idea.

Oh, it's horrible: you've seen whole cycles, so you know how even the good things you could do next go bad in the end. It's easy to fall into excessive cynicism, and to stop learning new stuff because of the 30s lesson of how hard it really is to learn anything in full.

To be honest, I recommend faking to yourself that you're in your 20s still :) Much healthier attitude.

Isaac Newton invented calculus in his early 20s

He also nearly killed himself with alchemy experiments. He was very right about some things, and very wrong about others.

And he started doing this in his 30's/40's when he had stopped contributing to physics. Maybe you grow more senile the more you age.

How many pioneering mods/hacks did you do in your 20s?

Like most hard problems, it's easy to pick off the low-hanging fruit and claim that you have a solution.

Self-driving cars (in some form or other, under some loose definition of "self" and "driving") have been around since the 1920s. But it still remains a vexing problem.

It is quite easy to program a car to stay between 2 cars and follow the car in front. It is quite another to have the same car drive (a) on a road without lane markings; (b) in adverse weather conditions (snow, anybody? Hotz should take the car to Tahoe); or (c) through traffic anomalies (an ambulance/cop approaching from behind, an accident/debris in front, etc.); and so on.
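To show just how little the "easy" case demands, here is a toy car-following controller. The gains, desired gap, and time step are invented for illustration; a real system layers perception, filtering, and safety envelopes on top of something like this:

```python
# A bare-bones proportional-derivative follower: hold a fixed gap to
# the lead car by commanding acceleration from the gap error and the
# closing speed. All constants are made up for this sketch.

def follow_step(gap, rel_speed, desired_gap=30.0, kp=0.05, kd=0.3):
    """Acceleration command from gap (m) and relative speed (m/s)."""
    return kp * (gap - desired_gap) + kd * rel_speed

# Simulate: lead car cruises at 25 m/s, ego starts 60 m back at 20 m/s.
gap, ego_v, lead_v, dt = 60.0, 20.0, 25.0, 0.1
for _ in range(2000):
    a = follow_step(gap, lead_v - ego_v)
    ego_v += a * dt
    gap += (lead_v - ego_v) * dt

print(round(gap, 1), round(ego_v, 1))  # settles near the 30 m gap at lead speed
```

Nothing in those dozen lines knows about missing lane markings, snow, or a cop waving you through an intersection, which is the whole point of the distinction above.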

No offense to GeoHot, but I'd love to see his system work in rush-hour 101 traffic; or cross the Bay Bridge, where (coming to SF) the lanes merge arbitrarily.

The key challenges are not only to drive when there's traffic; but to also drive when there's NO traffic, because lane markings, etc. are practically nonexistent in many places.

Having said all that, I still admire his enthusiasm and drive (no pun intended). Tinker on!

TBH, since it's a training-based system, it's "just" a matter of making sure the training set is large enough to include the situations you mentioned (assuming the training method is robust, generalizes well, etc). I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on) -- and there are many of them I suspect (at least 100s?). If you encounter roughly one tricky scenario per hour of driving around, this puts the number of hours at something like 100,000+ -- not easy to come up with by yourself (that's about 50+ years of driving 6 hours a day).

Mobileye is doing something interesting by curating the reliable parts of the dataset (e.g. they have curated databases of traffic signs for each region) -- again not something you could do on your own, and seemingly archaic (hence GeoHot's criticism), but if you can afford it, it can speed up the training significantly.

Tesla is a massive resource here because they already have a huge fleet of internet-connected cars providing enough data to fill the aforementioned training set in a matter of days or months: estimate their fleet at 40,000 cars -- then they could fill that minimum dataset in less than a day, and in a month they might have a 100x safety margin. Of course, there's a big technical problem of relaying all that video (maybe they just relay prediction failures), but the data is there.
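Making that arithmetic explicit (every input below is a guess from the comments above, not a real figure):

```python
# Back-of-envelope: hours of driving needed to cover the edge cases,
# versus how fast one driver or a 40k-car fleet could collect them.
edge_case_types = 100     # distinct tricky scenarios (guess)
examples_needed = 1000    # examples wanted per scenario (guess)
hours_per_example = 1     # ~one tricky scenario per hour of driving (guess)

hours_needed = edge_case_types * examples_needed * hours_per_example
years_solo = hours_needed / (6 * 365)       # one car, 6 h/day
fleet_days = hours_needed / (40_000 * 3)    # 40k cars, ~3 h/day each

print(f"{hours_needed:,} h ~ {years_solo:.0f} years solo, "
      f"{fleet_days:.1f} days for the fleet")
```

The guesses are crude, but the conclusion is robust to an order of magnitude either way: hopeless for one driver, almost trivial for a connected fleet.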

Another fundamental problem with exclusively hands-off training (and little optimal control theory, etc.) is picking up bad habits from drivers -- since the training data acts as the ground truth, even the best algorithms will, in the best case, only be about as good as a good driver in each scenario.

> I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on)

The problem is: there are new edge cases born every day.

Consider, for example, an accident where the cops have set up flares. How often do you come across one of those? Very rarely, I imagine. And even if you did come across it in your training set: how does the ML know that you are following the cops' signals, and not just randomly switching lanes? That the flares are a critical signal?

Good point, but if you consider the Tesla dataset... it's formidable. Every day they could collect as much data as ~55 years of one person driving a lot every day. Even if you never encountered this case, if it happens at all it's likely to be seen many times (probably 100+ in a few months) in that dataset. After self-driving cars have gone mainstream, this may start to be seen as a design problem by traffic agencies: they might standardize ways to deal with traffic a little more.

Ultimately, as long as the fraction of cars driving autonomously is small enough and procedures change slowly enough, you should be able to continuously update the driving system.

But let me reinforce that a pure learning approach, even with very large datasets, may not be as efficient as one would like -- the curation of signs is a good idea, and manually reviewing accidents and near misses (a highly human-intensive task) and perhaps flagging bad driving behavior (probably after some outlier screening, which can be good or bad) will be important to get it really good with the training-intensive approach (as opposed to the top-down optimal path planning and control approach).

EDIT: Mobileye CEO discusses some interesting design issues and manual validation (and shows they have lots of data, good sign) https://www.youtube.com/watch?v=kp3ik5f3-2c&feature=youtu.be...

> I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on) -- and there are many of them I suspect (at least 100s?).

It depends on what sensors are in use and how the environment affects them. I can't get into much detail unfortunately, but I have seen radar systems that use naive Bayes classifiers for target detection and classification. Those systems required large numbers of examples across a large, multi-dimensional space to work effectively. Target detection and identification is a trivial task compared to what the control system of an autonomous vehicle needs to handle.
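For readers unfamiliar with the technique mentioned, here is a toy Gaussian naive Bayes classifier on invented "radar-like" features (amplitude and Doppler spread). This is purely illustrative and not the poster's actual system:

```python
import math

# Gaussian naive Bayes: model each feature per class as an independent
# Gaussian, then classify by the highest summed log-likelihood.

def fit(samples):
    """samples: {label: [(f1, f2), ...]} -> per-class (mean, var) pairs."""
    model = {}
    for label, rows in samples.items():
        stats = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((x - mu) ** 2 for x in col) / len(col) + 1e-9
            stats.append((mu, var))
        model[label] = stats
    return model

def log_gauss(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def predict(model, features):
    # "Naive" assumption: features are independent given the class.
    return max(model, key=lambda label: sum(
        log_gauss(x, mu, var)
        for x, (mu, var) in zip(features, model[label])))

train = {
    "vehicle":    [(9.0, 12.0), (8.5, 10.0), (9.5, 14.0)],
    "pedestrian": [(2.0, 1.5), (2.5, 1.0), (1.8, 2.0)],
}
model = fit(train)
print(predict(model, (8.8, 11.0)))
```

The simplicity is the point of the parent's remark: the classifier itself is a few lines, but making it reliable requires example counts that blow up with the dimensionality of the feature space.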

What if a driver makes a mistake, like running a red light, and doesn't get a ticket?

Who validates all this data?

Attaching a DNN to a driver as a training set is a pipe dream, for now. Maybe after we understand how our brain perceives time and builds models of future outcomes, we could apply that to build better NNs. For now, NNs are best used as classifiers in a controlled environment, not in an environment with unpredictable states.

And especially not in an environment with adversaries: http://spectrum.ieee.org/cars-that-think/transportation/self...

The vulnerability to sensor error (adversarial or not) is certainly not exclusive to NN-based approaches. I commented on the validation problem in the comment above and in another below; one way to deal with it is simply manual validation (mainly for false-positive elimination). Indeed, this approach with DNNs is already being employed by Mobileye, so I don't think it's a pipe dream.

Sensor failures or well-characterized adversarial inputs are actually really easy to deal with -- they are very easy to simulate with a given dataset and self-validate against using traditional techniques -- simply make one or more cameras fail (or receive spurious signals) and verify the output.

It's a good point that probably all autonomous cars will need a contingency plan (probably human intervention and/or blind emergency stops), since even with a redundant network of cameras around the vehicle, a critical number can and will occasionally fail with non-zero probability (at the fleet sizes that will be dealt with).
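A sketch of the replay-style self-validation described above, with invented numbers: corrupt one channel of a recorded three-sensor distance trace and check that a robust (here, median) fusion still tracks the truth:

```python
import random

# Replay a recorded trace, inject a stuck/spurious sensor, and verify
# the fused estimate degrades gracefully. Sensor count, noise level,
# and failure value are all illustrative.

def fuse(readings):
    """Median fusion: robust to a single wild sensor out of three."""
    return sorted(readings)[len(readings) // 2]

random.seed(0)
truth = [20.0 + 0.1 * t for t in range(50)]  # true distance trace (m)
frames = [[d + random.gauss(0, 0.2) for _ in range(3)] for d in truth]

def max_error(frames, failed=None):
    errs = []
    for d, frame in zip(truth, frames):
        frame = list(frame)
        if failed is not None:
            frame[failed] = 999.0  # simulate a stuck/spurious sensor
        errs.append(abs(fuse(frame) - d))
    return max(errs)

print(max_error(frames), max_error(frames, failed=0))
```

With three channels, the median rejects one arbitrary failure; the same replay harness immediately shows why two simultaneous failures need the contingency plan instead.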

