The first person to hack the iPhone is building a self-driving car (bloomberg.com)
949 points by bcg1 on Dec 16, 2015 | 447 comments


Prototypical case of the 80/20 rule. He has implemented the happy case. But that system is nothing people would realistically want driving their cars.

What he did is impressive. But the results are not that outlandish for a talented person.

1) Hook up a computer to the CAN-Bus network of the car [1] and attach a bunch of sensor peripherals.

2) Drive around for some time and record everything to disk.

3) Implement some of the recent ideas from deep reinforcement learning [2,3]. For training, feed the system the observations from the test drives and reward actions that mimic the reactions of actual drivers.
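
A minimal sketch of what step 3 could look like as plain behavioral cloning (PyTorch assumed; every name and dimension here is invented, not taken from his system):

    import torch
    import torch.nn as nn

    class DrivingPolicy(nn.Module):
        # Maps a flattened sensor observation to [steering, throttle].
        def __init__(self, obs_dim, act_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, act_dim),
            )

        def forward(self, obs):
            return self.net(obs)

    def train(policy, loader, epochs=10):
        # "Rewarding actions that mimic the driver" reduces here to
        # regressing the policy's output onto the logged human actions.
        opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for obs, act in loader:
                opt.zero_grad()
                loss_fn(policy(obs), act).backward()
                opt.step()

Note that this only ever learns what the logged drivers actually did, which is exactly why the emergency cases below are the problem.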

In 2k lines of code he probably does not have a car model that can be used for path planning [4] (with tire slippage, etc.), so his system will make errors in emergency situations, especially since the neural net has never experienced most emergencies and could not learn the appropriate reactions.

And guess what: emergency situations are the hard part. Driving on a freeway with visible lane markings is easy. German research projects have driven autonomously on the Autobahn since the 80s [5], and neural networks have been used for the task since about the same time [6].

[1] http://www.instructables.com/id/Hack-your-vehicle-CAN-BUS-wi...

[2] http://arxiv.org/abs/1509.02971

[3] http://arxiv.org/abs/1504.00702

[4] http://www.rem2030.de/rem2030-wAssets/docs/downloads/07_Konf...

[5] https://en.wikipedia.org/wiki/Eureka_Prometheus_Project

[6] http://repository.cmu.edu/cgi/viewcontent.cgi?article=2874&c...


A project like this is extremely impressive. The guy deserves a lot of credit (and maybe some investment?). That's hacking in the truest sense.

The parent's checklist misses a bunch of things. For instance, "1) Hook up a computer to the CAN-Bus network of the car". That alone is not trivial. It is trivial if you want to read the car's odometer, but good luck doing more than that. For instance, people are still trying to make sense of the reported battery cell voltages in the Nissan Leaf. None of the interesting features are documented, and they require serious reverse-engineering. "Hooking up to the CAN-Bus" can easily become a task for a whole month, full-time. Not to mention that the most useful features for the self-driving part are probably not accessible via the CAN-Bus - people are still trying to unlock the doors of the aforementioned Nissan Leaf. Steering, acceleration, and braking are unlikely to be on the CAN-Bus. "2) Attach a bunch of peripherals" is also hand-wavy, and the same goes for the rest of the post.

It would be like dismissing SpaceX's accomplishments by saying: "1) Build rocket frame. 2) Build engine. 3) Program flight software. 4) Fill up the tanks with fuel. 5) Push a big red button." The devil is in the details.

With that out of the way: if the events happened as described, this guy should be convicted of reckless "driving". Taking a prototype that had only started working a few hours prior out on an actual test run on a freeway with other cars is insane. What about some simpler, more useful and less dangerous goal, such as a lane-departure warning add-on for cars which lack that capability?

The article title is the worst part though. It's not "clever dude created a self-driving car prototype by himself". It is "Dude is taking on Tesla by himself". Which is bullshit.

EDIT: Fix typo.


It's only impressive to outsiders who aren't aware that none of this is new and that it reuses the work of others. There are tons of videos and documentation from amateurs and hobbyists hooking computers up to the CAN bus. In parallel to the tech community, the tuner/mod community has been doing this on their own. It's been old news for years, has led to many funny pranks and stunt hacks, and culminated in Charlie Miller and Chris Valasek's media stunt last year.


How is it not impressive to take bits of knowledge from multiple domains (programming, instrumentation, electrical engineering, control laws, etc) and fuse them together into a single thing?

We all take from the good work of those around us. But how many people seriously do things with that work? Not many, and disdaining people who do so is not productive or, in my view, a good thing.


I think because the goal posts keep moving with technology. The number of people who have ever combined knowledge from multiple domains into a useful thing may be small relative to the general population, but it's been done. The first time it's impressive. Then others add different ideas and concepts. Then everyone can do it and it feels old.

We've also seen all the news from Google about their efforts and the pain points that they are experiencing. And this guy cobbles some stuff together and just puts it on the road. Most of us are not as smart as this guy, but that's just irresponsible. That just leaves a bad taste in people's mouths.


>but that's just irresponsible

It's not like it's unsupervised. Is it any more dangerous than taking a human learner driver out in a car?


Yes. A bug in the program and it takes insane measures (e.g. braking and steering hard right); salvaging that situation is impossible at higher speeds. It's doubtful that a beginner would do such a thing, and even the attempt would take longer, giving the supervisor more time to intervene.


Isn't that the same for most self-driving technology? Computer vision toolsets aren't new. Obviously hooking up to a car's drive systems isn't new, full-size RC cars have been built for years for various reasons. None of the rangefinding hardware equipped on self-driving cars is novel.

Where's the sudden breakthrough? All of this is built on technology and work that came before it. The whole field. It probably only really started being worked on in earnest from a business context because big tech companies like Google had more money than they knew what to do with, and were willing to spend it on ventures with no likelihood of profit any time soon.


Everything that you have ever done in your life has been about reusing the work of others. When was the last time you mined your own copper ore and created your own wires, with a pickaxe you built yourself?

We have to draw the line somewhere.


Google's first Udacity class taught how to build a self-driving car. The basic algorithm is simple and produces a fairly safe vehicle. In no way should it (or others) have been tested on the freeway as described in the article, however.
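
To give a flavor of "simple": a rough sketch of a PID steering controller of the sort that class covered (gains and names here are invented, not taken from the course material):

    class PID:
        # Drives the cross-track error (distance from lane center) to zero.
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def steer(self, cte, dt):
            self.integral += cte * dt
            derivative = (cte - self.prev_error) / dt
            self.prev_error = cte
            return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

    controller = PID(kp=0.2, ki=0.004, kd=3.0)  # arbitrary gains

A dozen lines of control law; the hard part is everything around it.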


"Prototypical case of the 80/20 rule. He has implemented the happy case. But that system is nothing people realistically would want to drive their cars."

I'm painfully aware of this. Ten years ago I ran one of the 2005 DARPA Grand Challenge teams. That's about what we produced with less than three full time equivalent people. We didn't have to handle other vehicles, but we did have to handle off-road conditions. Ours didn't make many mistakes, but it was very conservative and kept stopping to rescan its environment with a line-scanning LIDAR on a tilt head.

I'm scared of happy-case automatic driving implementations. Tesla went down that road and had to back up, removing some features. Cruise's PR indicates they were going that way, but they now realize that won't work.


I am really, really interested in the work your team did. Do you have links to your body of work that I could sift through?


www.overbot.com


> Guy builds a fucking self-driving car. By himself.

> Not that outlandish for a talented person

What planet are you living on? I don't know what you did today, but I played with some jquery animations. This guy drove around in a self driving car that he built himself. It doesn't solve for edge cases? Neither do 90% of CRUD apps. Holy shit.

Give some credit where credit is due. This is not an ordinary or average outcome.


Well, he didn't actually build the car. He built a system that operates the existing car's steering and speed controls. And comparing the proper software solution to a CRUD app undermines the work that's been done by the big players in the space for the past several years.


Well yes, he only built the software that operates a car without a person driving it, connected it to a car, and did it all by himself. One person.

My point was that what 99% of HackerNews does is likely nowhere near as interesting or as difficult, so when the top comments are all shitting on someone who did something that's actually pretty amazing, HackerNews can go to hell. I mean that from the bottom of my heart. I'm done here.


Is it so much to ask that people don't keep erroneously stating he built the car? The car he used was an off-the-shelf component.


What he did was extremely impressive. But he's up against really high expectations. People expect him to have made massive breakthroughs in self driving car technology. Against that expectation, it doesn't seem so impressive.


That is true. But a single guy getting a car to drive itself in a month is really superhuman.


Why is the first comment on HN minimizing this truly impressive project? Of course it's not perfect, he's ONE person.


Because this article reinforces our bizarre cultural notion that one person deserves all the credit for some innovation.

Nevermind the ridiculous amount of engineering that was required to build all the tools he's using and the order of magnitude more engineering required to make this a safe, mass producible product.

But nah, let's just praise the founder, allow him to get rich while we all do the dirty work.


This comment is depressingly cynical. This is probably the single best definition of "hacking", as the community often refers to it, that I've seen in a very long time. One guy starts working on something only the biggest companies in the world dare attempt, throws together a minimal prototype built on top of existing technology. Just look at the picture of it.

Claims of commercial viability or beating Tesla are a bit ridiculous, but this is pretty damn amazing.


I think it's a fair comment given his quote "I know everything there is to know" and the headline of the article claiming he's "building a self-driving car by himself". I've always thought the "hacker" community attributed value to sharing and building off other's work, but maybe times have changed.


Have you ever spoken to a journalist? It's their job to sell clicks with charged headlines and over-blown quotes. If they followed you for a day I promise they'd generate some equally stupid quotes.


Yes, that kind of journalism exists, but does it have a place here?


Seems obvious to me that the journalist was manipulating what he said, which was that he is deeply familiar with the state of the art of AI tech.


To expand on that: '“I understand the state-of-the-art papers,” he says. “The math is simple"', which seems like an attitude of someone without a solid understanding of ML. But who knows, maybe he's figured out something the rest of the field hasn't...


If this is the "hacker ethos" then I want nothing to do with it:

> “I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”

> “ ‘If’ statements kill.”

> “I want power. Not power over people, but power over nature and the destiny of technology. I just want to know how it all works.”


Hotz is pretty eccentric, but he's also pretty incredible. While technology wouldn't progress very far if everyone was like Hotz, I also don't think it would get very far without people like Hotz.


Exactly. You can appreciate aspects of a person and the work they do without deifying them in their entirety. Linus Torvalds and weev (being an extreme example) fall into this category.


Agreed. I can appreciate people who can go heads down and get things done. Where it falls apart for me is when those people get deified--or deify themselves, like that last quote demonstrates. And when they demonstrate an unwillingness to collaborate.


Then may you get what you want.


He implemented methods from the literature using a sensor specifically designed for this application. Grad students do this for class projects.


It's not "one guy starts working on something only the biggest companies in the world dare attempt" though, it's something hundreds of people have been doing for years now. He's more of a media hacker than he is a car hacker.


You're just easily impressed, that's all :)


1. Make a working prototype

2. Impress investors

3. Hire others to finish the job

4. Profit

Tesla customers invest in Musk. Musk invests in Hotz. Hotz invests in developers. Developers in researchers. We're all delegating until we find someone who can finish the job. We're investing in people to hire the right people.


You're assuming that Hotz then does nothing, and also Musk. You really think Musk is sitting around with all this free time not doing anything? You don't think it would all fall apart without the key people still in key roles?

It doesn't work this way. You just move up the value chain.


He didn't write the article, it's not his fault it comes off as cocky. He is tackling an impressive project on his own, and spitting in the face of corporations. He should be giving talks at DEFCON about this, teaching people how he did it.

Your comment screams of a superiority complex, but I bet that you are actually a nice person in real life. Hotz is doing good work, and everyone in the technical field is relying on work done decades before they were born.


He should give talks at defcon teaching people how he did it?

You mean like this one, from Defcon in 2013?

http://hackaday.com/2013/07/26/defcon-presenters-preview-hac...

Or The Defcon 19 How To CanBus Hack Workshop, that taught classes about this in 2011?

http://www.canbushack.com/blog/index.php?title=learn-to-canb...

Unfortunately for him, defcon prefers original content, not someone claiming credit for what has been demonstrated repeatedly in previous years.


He sure didn't write the article, but looking at him and what he's saying on the video gives me the same impression of cockiness. But I bet he's actually a nice person in real life. ;)

It's an impressive personal project, no doubt about that. It's however also important to recognize the difficulty of having a system that works in mass production and handles all kinds of situations. Like someone said earlier, it's easy to have the car drive on a clear day with very visible markers. The hard part is when it rains, when it's foggy, when things are less optimal, etc.

Once he gets to that point he'll find that part to be a lot harder than what he's accomplished so far.


A good point. I did some HTML for my elementary school when I was a kid, and the local newspaper put me up as a 'whiz kid' on their front page. Not that anything I did was shockingly complicated in the slightest, even for the web of that era. Journalists hype stuff, that's nothing new.


Reminds me of the quote from "The Social Network":

>> "If they had engineered a self-driving car, they would've engineered a self driving car".

You are just being absurdly accusative in your comment. That guy has built a self-driving car with nothing but tools available on the market.


Putting a prototype self-driving car on actual roads without understanding the difficulty of that project seems like a legitimate, substantive criticism, and I don't care how many people are involved in the project.


Now, that's a very valid criticism. I don't care about his personality, but testing the car on live roads is asinine. And, this journalist jumps in and excitedly plays up how he was afraid for his life, etc.

Yeah, instead of going after the sensationalism, how about you discourage him from endangering the lives of others who weren't given the opportunity to make such a stupid decision?


Yes, it's one person. WE GET IT.

This article is a hero worship piece about a guy rather than a story about the technology. It's like how you can't find an article about Theranos that isn't actually just a photo-shoot/celebrity worship article about its founder.


Early investors love genius superheroes because lesser investors are willing to pay a premium to invest in companies with a superhero and a story.


The guy clearly is technically brilliant. But I was referring to the results, not how the results were achieved.

In the video, he claims to want to achieve level 3 driving. Let's see how he can do the following under non-perfect conditions:

- switching lanes

- stopping at lights

- turning corners

- making left turns in traffic

Then we can move on to the more difficult situations.


I feel as though there are improvements that can be made to the driving environment:

Have all stop lights carry a beacon that tells cars their state: "the light is green on north-south, red on east-west."

Have exit signs carry beacons as well.


Have you thought this out? What happens when someone hacks their own beacons for lulz? So then the beacons have to have public key cryptography. Now all of the firmware will need to be audited and kept updated. Will there be over-the-air updates? What if someone cracks or steals the key? It seems to me that a target as juicy as "getting control of the North American road network" would be worth a major national power throwing a significant fraction of its resources at it, so that inflates the computing power such devices will need.
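
For concreteness, a hypothetical sketch of what a signed beacon could look like (Ed25519 via the Python cryptography package; every field name here is invented):

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In reality the key would be provisioned per intersection,
    # not generated on the device.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    def make_beacon(intersection_id, state):
        payload = json.dumps({
            "id": intersection_id,
            "state": state,     # e.g. {"north-south": "green", "east-west": "red"}
            "ts": time.time(),  # timestamp to resist replay
        }).encode()
        return payload, private_key.sign(payload)

    payload, sig = make_beacon("x1", {"north-south": "green", "east-west": "red"})
    public_key.verify(sig, payload)  # raises InvalidSignature if tampered with

And every piece of this - key provisioning, distribution, revocation, firmware updates - is exactly the attack surface described above.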


What if someone installed their own traffic light right now?


That's immediately visible to people with eyeballs. The first sign that something is going wrong isn't going to be a car colliding with another car. It's going to be, "Hey, why are those kids installing a light with a step ladder?"

Have you seen a traffic light? They're pretty substantial. How long would it take for you to make one in a hackerspace?

Contrast this with hacking OTA updates for traffic beacons. You might not even have to change any atoms around to do your dirty deed. You might not even have to be there physically.


Why would it need all that if the only thing it's doing is simply announcing its state?


So, 'simply' invest in billions of dollars of infrastructure improvements that will serve less than one percent of vehicles on the road?


It's a custom on HN. Look what happened when DropBox was announced on HN: https://news.ycombinator.com/item?id=8863


One person can build something that starts a revolution. See Woz/Apple.

The real issue here is that self-driving cars are probably the wrong place for that to happen in AI. At best, a solo project creates a crappy prototype where there was no product before (again, see Woz/Apple). The expectation for driverless cars is too high – they need to be 100% good, because your life is on the line, not 80% good.

What's the AI project that would blow people away, even if it was a shadow of a working prototype? I think that's the real question.


I don't follow why AI vehicles need to be 100% good. Plain old human-driven vehicles sure aren't and we accept their utility as being worth the trade.


Imagine the day an AI vehicle causes an accident that otherwise would not have happened.

Even if AI cars are statistically better than humans on average, it's an issue of control. It's true that most accidents are avoidable and caused by human error, but most people are (perhaps overly) confident in their own ability to drive safely (this is also why people text and drive).


We, as the flawed beings we are, can't accept both giving up control and not getting guaranteed safety as a result.


In one word: liability.


In two words: actuarial tables


>What's the AI project that would blow people away, even if it was a shadow of a working prototype?

Robot that can build a better robot?


It's neat that one person did that. But debugging on-highway? Bad idea.

Finding a safe place to test an autonomous vehicle on a budget is hard, but not impossible. Our initial testing in 2004 was in a large unused Sun parking lot in Fremont.[1] (Sun got carried away with expansion plans, and started building a big facility there. They paved the parking lots and poured the building foundations, then stopped construction.) Later off-road testing was at the Woodside Horse Park. We also looked into testing at the Hollister off-road vehicle park, and discovered we could book a sizable area on a weekday for our exclusive use. We never used that, though. We'd also looked into using the old FMC tank test track in San Jose, but never found a good contact there.

[1] https://goo.gl/maps/8CZsJZ6SPbA2


Who cares about how many people built it? What matters is the end product, which is something that existed in the 80s.


Because "minimizing" is an emotional notion, and it's irrelevant. But providing a response to over-enthusiastic reception is informative if only because it presents the other side of the issue.

In other words, I don't care if this guy is painted as a genius or a script kiddie. He's not relevant in my life, and I will forget about him within a week. However, the lessons about machine learning and engineering that I can find in this article are the reason I subscribe to HN (yes, I don't really know shit about these topics, and don't have enough time to fill gaps with real sources), and this comment is the most informative, just because he tries to cover what the article didn't.


"Built a self driving car" set the bar way too high for what this guy did


I think it's fair given his mission to "crush Mobileye".


He built an impressive prototype, considering he hacked it together in a month.

This is just the start, not the end.


I'm afraid this may be also the end, more or less. From this point onwards, things get so much harder and more labor-intensive, that doing everything alone seems impossible.


I don't see why it would. Once you get enough base data you can start simulating the data from what you have, inputting different scenarios without actually encountering them IRL. Faking sensor input and randomizing should get it most of the way there.


When lives are involved handling edge cases is everything. The person stepping off a curb, the cyclist that falls in front of you, the car that weaves in its own lane and can't be used as a reference, traffic lights that are out of order, stop signs hidden by trees... and on and on. Mess one of these up while autonomous and severely injure someone and you're done.

Human drivers might only see one of these cases a month, or 6 months, but not driving over someone in that case is what is critical. Not saying it's an impossible task, but IMO it will require a lot more training data than humanly possible for one person to generate.


I have significant experience in faking sensor data (specifically radar), and can tell you from it that fake sensor data is terrible. There is way too much going on in the real world to accurately create sensor data without actually recording sensor data. That is, you can manufacture the situation for the sensor to capture much more effectively than you can manufacture the data from a model.

Even pseudo-faking like we were trying to do, wherein a generated signal is injected into actual, recorded background noise, is fraught with problems. Anybody who tries to develop a control system based solely on such data is in for a rude awakening when they try it for real for the first time.
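
To illustrate what that pseudo-faking looks like, a toy sketch of injecting a synthetic point-target echo into recorded noise (all parameters invented; a real radar return is far messier, which is the whole problem):

    import numpy as np

    def inject_target(recorded_noise, sample_rate, delay_s, amplitude):
        # Place a Gaussian pulse (standing in for a matched-filter
        # response) at the sample index of the target's round-trip delay.
        echo = np.zeros_like(recorded_noise)
        center = int(delay_s * sample_rate)
        t = np.arange(-50, 50)
        echo[center - 50:center + 50] = amplitude * np.exp(-0.5 * (t / 10.0) ** 2)
        return recorded_noise + echo

    noise = np.random.randn(100_000)  # stand-in for a real recording
    faked = inject_target(noise, sample_rate=1e6, delay_s=0.02, amplitude=5.0)

The injected pulse is too clean: no multipath, no clutter, no correlated interference, none of the things that break your detector in the field.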


Faking input is how most people test their buggy, crappy software. It rarely matches reality.


Ah the naivety. Just like 50s AI research :D

History keeps repeating


You're probably right (since pessimism and cynicism are pretty successful predictors whenever anyone is trying something bold), but we have nothing like enough information to know if he's done something revolutionary or not. As the article makes clear, he doesn't want to give too much away, so of course you're stuck with a vague summary which sounds like he's just done what any smart person skilled in the art would do.

General cynicism isn't really adding much to the conversation in my opinion, since almost everyone here probably knows this already, and too much cynicism can put people off starting projects and people starting projects is something we should cherish.

I do think your point about emergency situations is substantive though. Perhaps he is only planning for self-driving while supervised by humans, but his idea for training as described (become an Uber driver) would not at all produce the kind of dataset that would assure me that I would be safe. I think a lot of training with advanced drivers in simulators, where you can have crazy life-threatening situations, would be the absolute minimum. I'd be worried that bad habits picked up on the thousands of Uber rides would kick in during an emergency rather than the couple of situations that would be feasible to train on in real life.


With neural nets, training the AI to handle emergencies will be all about exposing it to as many emergency situations as possible.

What's better about an AI powered by neural nets is that you could train an AI to go offroading.

Get enough data and you've got a model for dealing with a given situation. Google's biggest strides with OCR, Voice recognition, Spam filters and other AI tech early on came from its ability to gather a huge corpus of data.

The real challenge is twofold: gathering data, and feeding the AI the inputs that actually matter. This is the secret sauce that Hotz refers to in the article as the information he's not willing to disclose. That information will become commoditized in due time (like low-latency optimization for HFT), but it will take plenty of institutional money & experience (Google, Apple, Tesla, Ford, etc.) to get it there.


Using neural nets to deal with emergencies runs you into the Anna Karenina problem - "All happy families are alike; each unhappy family is unhappy in its own way."

It's fairly easy to train, and verify a system for driving in well-behaved traffic. Unfortunately, the problem space of not-well-behaved traffic is far wider - and is very hard to gather enough data to train a system well.

What you're going to get is self-driving cars which handle 99% of driving just fine, and when they end up in emergencies, find the human 'driver' to have dozed off at the wheel. (All in all, their safety record might end up better than the status quo, but that's not a certainty.)


The trouble with using neural nets for safety-critical real-time systems is that it's really hard to do the necessary level of validation. You can't accurately predict how the system might react in totally novel or unexpected situations. Which isn't to say that human drivers handle those situations well, but most of the time they don't do something totally bizarre or dangerous.


Humans totally do things that are bizarre or dangerous when in shock, but we've come to accept that as a personal responsibility and a price the society has to bear.


We have millennia of experience estimating how people will react to various shock situations and what constitutes those situations. It's intuitive.

The reactions of a neural net to unusual stimuli are likely to be counter-intuitive at best and unpredictable at worst.


Because we can't re-engineer humans into rule-based automata. (And we probably shouldn't even if we could.)

Machines, on the other hand, are a different story.


Exactly. People may swerve and over-correct, causing their car to flip, for example.


Electronic stability control (standard in US passenger cars since 2012) has already mostly solved that problem. http://www.safercar.gov/Vehicle+Shoppers/Rollover/Electronic...


Human error when driving a vehicle is one of the top causes of premature death globally. That is what we should be measuring the technology against, not perfection.

It seems that the technology has already reached, or is very close to, human levels of proficiency on the road. If specific use cases (offroad, snowpacked roads, etc.) are problematic, they can be limited or prohibited in the meantime.


Doesn't it seem possible that we could start testing the cars in a simulated environment?


Simulated environments aren't accurate enough (inputs are too clean, other drivers don't act real, etc) and would end up training the software to do the wrong things. A more reasonable approach would be to record the activities of multiple safe human drivers across a wide range of situations and then train the software to act like them.


He said for testing, not for training the neural network. But just seeing how it behaves in various situations, to find its flaws, and see if it's ready for the road.


Sure, but the real world is 100x more complex than simulation.


Have you built deep learning models before? Neural nets are not magical boxes that you stick data into and instantly get great, generalized, robust models at the other end.

I have no inside knowledge, but I would be very surprised if Google's self-driving cars use neural nets as a significant component right now (which isn't to say there aren't people exploring its use).


>He has implemented the happy case.

I could probably do this using ROS, OpenCV, and PCL. At least on a level where the car could recognize the road well enough to drive on it, but I imagine both my car and his car are nothing that any sane human would want to sit in. That last 20%, focusing on safety and edge cases, is going to be 100x the work/innovation/testing/staff/code/talent/smarts here.
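
For the happy case, something like OpenCV's Canny edge detection plus a Hough transform gets you surprisingly far (a minimal sketch; the thresholds are illustrative only):

    import cv2
    import numpy as np

    def detect_lane_lines(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Probabilistic Hough transform returns candidate line segments
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=20)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        return frame

It finds lane markings on a sunny day and falls apart in rain, glare, and construction zones - the 20% that is 100x the work.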

As a side note, I am intrigued by the idea of a FOSS self-driving car. It's a little worrisome that we'll never see the code Tesla, Mercedes, etc. are using.


I don't really see your complaint here. He did build A self-driving car, not THE self-driving car. It's an impressive hack, as you noted; maybe it can turn into something bigger with more time and energy. So what is the point of shouting this down with an "ITS INCOMPLETE"? It wasn't as if this is a Kickstarter promotion or even a product. Geez. Get back to hacking.


It's more of a lane-assist in good conditions, but not expected to navigate city streets or handle unexpected conditions.

So yes, it's an amazing project for a single person, but it's not really a self-driving car.


>Prototypical case of the 80/20 rule.

Could you explain on what basis you claim this? Do you have intimate knowledge of his prototype, the amount of work he put in, or the novel ideas that he brought to the project in addition to integrating pre-existing tech?

From what I can understand, your argument seems to be "let's see if I can guess what he did". If you're an authority in this field, then your guess could be very accurate, I suppose.


That's the feeling I got from the video too. Maybe he tried too hard to make it appear as 'this is not as hard as big corps say it is!', but it also felt like 'hey, ML + basic CAN controls = self-driving!'... and there I disagree. I want a computer with some general knowledge of physics + ML, not just abstracted driver patterns from self-play.


“We’ve figured out how to phrase the driving problem in ways compatible with deep learning,” Hotz says.

OK, maybe it's BS, but he's not saying what you say he's saying.


Why is deep learning this magic pixie dust you sprinkle on anything and it works? Have the people who are suggesting this actually gotten deep reinforcement learning to work on complex, long-time-frame, real-world continuous control problems before?

I would be very surprised if you got deep reinforcement learning to perform well on a self-driving task, even on a highway. If you did, well, your faculty position at Stanford is waiting for you.


They're very powerful classifiers, and from the outside it seems they can learn to distinguish between arbitrarily complex states.

More states? Add neurons! More search space? Add layers!

Except they have unpredictable resonances, especially multi-layer networks:

http://www.i-programmer.info/news/105-artificial-intelligenc...

They're just starting to understand this, but I believe the myth of the 'do it all DNN' is going to die. It's time to start thinking about clusters of independent neural networks, each supervising an independent aspect of the search space and/or each other.


It's pretty cool that it can be done on the cheap, though. I imagine a lot of people would be willing to pay a couple hundred dollars to retrofit their car to get autosteer alongside their cruise control feature.

Small personal example: my family lives out in the suburbs. My dad works in a neighboring city. His commute is about a half hour, 20 minutes of which is a straight shot on a major highway. I'm sure he'd be willing to pay a few hundred to reclaim that 20 minutes each way to read a book/the newspaper, check his email, browse the internet, etc.

It'd also be good for road trips.


I certainly wouldn't want to add amateur autosteer to my car, or accept the responsibility that comes from hacking my own self-driving car. The big manufacturers will accept liability for their systems -- build your own (or hack a factory system), and you're on your own, personal auto insurance may not even cover you since you weren't driving.


On the "cheap" relatively. The sensor he uses on the top of the car alone costs $8000. If you want to do it right, you'd also need a really nice IMU system to... I'm not sure what he's using but they can get very pricey.


Do you have a link for the best (commercial) IMUs around/how much they cost? I'm curious -- are they just clusters of MEMS like the ones in a phone, or something more advanced, like interferometry-based?


Define "best". We've used a quarter million dollar one at my current company, and at a previous job we spent far, far more than that for military airframes.

Groves "Principles of GNSS, Inertial, and Multisensor Navigation Systems" contains a good description of the various technologies used and their accuracy limits for navigation.

Consumer-grade MEMS are fine for airbags or the pedometer in your phone, but are not sufficient for inertial navigation, even when aided by other sources. At around $2K-30K you get systems that can provide accurate navigation for up to 2 minutes or so. They are used in things like missiles.

Aviation-grade IMUs need to meet the SNU 84 standard, which requires a maximum horizontal position drift of 1.5km in the first hour of operation. These will run $100K and up. Marine grade (subs, rockets, ICBMs) run $1 million and up, and have a maximum drift of 1.8km per day.

None of them are good enough for autonomous cars without sensor fusion.
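
A back-of-envelope check on why: a constant accelerometer bias b integrates into a position error of roughly 0.5 * b * t^2 (ignoring gyro errors and error coupling, which dominate in practice). Staying inside the SNU 84 limit of 1.5km over one hour implies:

    t = 3600.0             # one hour, in seconds
    limit = 1500.0         # allowed horizontal drift, in meters
    b = 2 * limit / t**2   # ~2.3e-4 m/s^2
    print(b / 9.81 * 1e6)  # ~24 micro-g of allowable constant bias

Tens of micro-g of bias stability is well beyond typical consumer MEMS parts, hence the need for fusion with GNSS, odometry, and the rest.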


I haven't paid attention to IMUs in a while. Here is one that I had some experience with in grad school:

http://www.oxts.com/inertial-interface-to-navcoms-sf-3050/


Even when the technology for self-driving cars is developed well enough for widespread use, it won't be practical or cost-effective to retrofit existing vehicles. By the time you strip everything down, cut holes, install sensors, run cables, etc it will be cheaper to just buy a new car.


I don't know if I'd be comfortable completely giving up control to a driving computer, even on a straight highway.

I am totally for a computer telling me when I'm driving dangerously because I'm distracted or tired.


Throughout history there are many cases of the lone tinkerer who achieves the breakthrough going up against much better funded adversaries.

Take the case of the Wright Brothers, who faced two well-funded adversaries. Samuel Pierpont Langley had a chair at Harvard, worked at the Smithsonian, and had, among other funding, $50K from the US War Department. Alexander Graham Bell, the inventor of the telephone, was an avid aviation enthusiast and an already wealthy man. One of Bell's assistants was Glenn Curtiss, who went on to found his own plane company.

Who would bet on two bicycle mechanics from Dayton, Ohio? No one, yet they were the first to fly.

The first popular microcomputer would surely come from IBM or HP yet it didn't. Two guys in a Cupertino garage built it and neither of them was a college graduate.

This guy may fail but I am not going to bet against him. In fact I hope they televise the race between the Comma and the Tesla. I'll bring the popcorn.


Tracing the Wrights' development process, it's the earliest example I know of a directed research and development program. The Wrights formulated a clear goal, identified the problems needing solutions, developed a series of prototypes aimed at proving each solution, did laboratory experiments to resolve others, invented physical theories to resolve still more, carefully documented their progress, etc.

I.e. they did much more than simply throw some ideas and parts together and see what stuck, like every other contemporary experimenter.


I agree, but the Wrights went counter to common thought at the time. Like Peter Thiel's favorite question: what do you believe that few others do?

Ever since I played around with Prolog in the nineties I have believed, just as digital eventually triumphed over analog, that neural networks will eventually triumph over rule-based software. I did not know when it would become apparent, but I firmly believe that it is coming.


Great observations and references @jpfr.

Learning for the AI does not have to come from real-world experiences only; simulated/controlled emergency situations would help as well! Further, even if the 2K lines of code stretch a bit more to deal with unknown situations, that isn't so bad either.


But this is a fundamental problem: The learning approach might need 100s of examples of drivers reacting to a bicycle on a sidewalk while turning right into a parking lot to get the right training input. Or perhaps it can learn from examples of bicycles and sidewalks and driveways to do the right thing. The point is, there are millions of edge cases, so getting examples of them all for training or verification is a very large task. The alternative is to build a more general world model where it's possible to work from the other direction and gain confidence that yes, the car senses all other obstacles correctly, and yes, it has algorithms that attempt to eliminate collisions in any circumstances. That's a fairly different approach, which ends up being much heavier in terms of effort and investment.


Make the AI play grand theft auto for many thousands of hours.


I would argue that you're half there. You'd have a car that could navigate roads, but ultimately it couldn't get you to where you want to go.

HERE has been dedicated to making maps at the quality level needed for autonomous cars. A few issues with the currently available data are that it isn't very detailed, and that you're at the mercy of volunteers (TIGER (old), openmaps data) or of a company whose main focus isn't maps (Google).

http://360.here.com/2015/06/02/take-create-autonomous-cars/


I agree this is 80/20 complete at the moment, but the gripes you have are not insurmountable if his model can truly learn with proper inputs.

What if you could simulate these conditions in a safe/controlled environment, and remove the driver from harm via remote control? Maybe build a virtual world that simulates the inputs as best as possible. That would be the cheap way, although you may lose fidelity.

If you had enough money you could build a simulated town/city, similar to a movie set, that throws all possible dangerous scenarios at you and operate the car remotely through these scenarios.


Path planning shouldn't require a ton of lines of code, really. I've seen in-use path planning and localization in the sub-2K LOC range.


In 2007, for the DARPA Urban Challenge, the Ben Franklin Racing Team used Matlab for their car. The entire thing ran on 5,000 lines of code, compared to similarly performing cars written in C/C++ which used over 100K lines of code.

http://velodynelidar.com/lidar/hdlpressroom/pdf/Articles/The...


Well that makes sense, given that basically all machine learning is transformations over matrices and that is Matlab's bread and butter. The equivalent C code might perform better when optimized, but it is going to be far longer and uglier. There's a reason a lot of ML work is prototyped out in Matlab first.


I would say basically all of robotics is transformations over matrices. As for Little Ben, there was actually no machine learning involved. Planning was sample-based on an occupancy grid. Localization was map-based.


That is true, and was very impressive. Consider how expressive Matlab is, though - x = A\b is one line in a .m file, but can correspond to several hundred lines of Fortran.
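
For comparison, the same one-liner from Python/NumPy, which likewise dispatches to decades of tuned LAPACK Fortran under the hood (toy system invented for illustration):

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])
    x = np.linalg.solve(A, b)  # the Matlab x = A\b for a square system
    print(x)                   # [2. 3.]
    # For over/under-determined systems the analogue is least squares:
    # x, *_ = np.linalg.lstsq(A, b, rcond=None)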


Why take someone down when what they've done is awesome?! Just suggest improvements, be constructive.


I know it's far from production ready. But he's demonstrated an ability to put together quite a diverse collection of hardware and software and get it all to work together. I'm impressed.


I think most people would agree it's one thing to understand how the pieces work together "in theory", and an entirely different level actually building a functional prototype.


Completely agree. There's absolutely no way you could use this approach for level 4 autonomy. Level 3, fine, but not level 4.


In the video, he says his company is only targeting Level 3.


Ah I didn't see that. Fair enough. I still agree with the parent comment that the (important) edge cases like avoiding a crash will not be well handled potentially.


Can you use the CAN bus to actuate?


Doing complicated stuff with closed hardware is very impressive. Controlling hardware is hard.
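
Mechanically, putting frames on the bus is the easy half - with something like the python-can library it is a few lines. The hard half is that the arbitration IDs and payloads that actually command steering or throttle are undocumented and vehicle-specific (the values below are made up):

    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    msg = can.Message(arbitration_id=0x123,  # hypothetical command ID
                      data=[0x00, 0x64],     # hypothetical payload
                      is_extended_id=False)
    bus.send(msg)

Whether the car obeys (or whether the stock ECU fights back) is another matter entirely.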


I'm not confident we can argue Google or anyone else has done much better. You might notice Google has never announced testing their cars where snow occurs, for example.

I think it's likely that much more of self-driving car development is smoke and mirrors than people realize. Best case scenarios are promoted as examples of how innovative a company is. Great PR, not necessarily a practical result.


>I'm not confident we can argue Google or anyone else has done much better. You might notice Google has never announced testing their cars where snow occurs, for example.

This is a sensor limitation. They have fully admitted this several times (heavy rain too). Equating the fact that this guy can't handle any emergency situations with a sensor limitation all of the lidar systems suffer from is stupid.

Google has shown many times that they have logic to handle routes around obstructions, construction, etc as well as cars running red lights, pedestrians walking into the street, etc. At least read up on something before you call it smoke and mirrors.


I actually have read up on it before I called it smoke and mirrors. While this is a year ago now, this is well after their cars were heavily marketed as being pretty sufficient and capable of detecting problems. ...Yet it couldn't detect the existence of a stoplight if it wasn't explicitly mapped ahead of time. And apparently, according to a Googler, the mapping required to make a road work with Google's self-driving car system is "impractical" to do at a nationwide scale.

http://www.slate.com/articles/technology/technology/2014/10/...

I'm not saying it won't ever happen, I'm not saying there haven't been developments in the technology. But people seem to have a disconnect in expectations of where the technology is, and where marketing departments for these tech companies want you to believe the technology is.


But they did solve the problem with mapping. I wouldn't call that smoke and mirrors. The implementation is difficult, but at least it's a solution.


2) It's probably not a disk. I'd bet it's an SSD, or something. I doubt he'd use an HDD considering how relatively cheap it is now... and considering how loaded he probably is.


SSD can also stand for Solid State Disk. Disk does not imply moving parts.


No. It's only referred to that because people are stupid. It contains neither an actual disk nor a drive motor to spin a disk.

If SSD = Solid state disk, then HDD = Hard disk disk.


But it does imply a disc, as in a circular thing. SSDs are very square looking discs...


It's kind of like apps still using the floppy drive icon to indicate the Save action.


SSD = solid state DISK


Actually no. It's only referred to that because people are stupid. It contains neither an actual disk nor a drive motor to spin a disk.

HDD = HARD DISK drive

SSD = SOLID STATE drive


I thought it was drive, not disk.


"His self-funded experiment could end with Hotz humbly going back to knock on Google’s door for a job."

The biggest thing here IMO is that this is self-funded. Any startup trying to do what he is doing in this environment would have raised $50 million, hired hundreds of engineers from top-notch schools, been accepted into YC, and had Marc Andreessen, Paul Graham, Sam Altman and all singing their praises.

Kudos to him for being self-funded.


Could not help thinking about the stark contrast between Hotz and the Theranos "entrepreneur": (a) self-funded vs. funded by VC friends; (b) demoing the product (try it and 'feel' it) early on vs. hiding behind a ton of marketing legalese.


The funny thing is he's the type of person you'd want to put your VC behind.


And yes, that happened http://www.getcruise.com/


Ta - I was trying to remember which YC startup was trying the same stuff. https://news.ycombinator.com/item?id=7933045

Seems they recently raised $15m http://techcrunch.com/2015/09/18/cruise-2/

Wonder how they compare tech-wise to Geohot's thing.


What's with making text so thin you can't even read it?


I suspect the design was originally done with a different image, and then the image was changed without a redesign.


The text that isn't overlaying images is terrible too. It's too thin for subpixel rendering to look decent. There's not enough contrast for viewing on a TN LCD panel unless it's in the middle of the screen.


p { font-weight: 100; }


Oi. This is not the kind of thing I want kickstarted.

I'd prefer my autonomous cars to have gone through insane amounts of testing, regulation, etc. This is just too new of a field, and the amount of edge cases you have to handle is practically infinite.


While I understand where you're coming from, and even feel emotionally invested in the idea of bootstrapping, objectively speaking, it's a bad decision to stay self-funded. It is, after all, a business, and if you can accelerate your business' growth 100x by taking on some very smart outside investors and hire very smart people, why wouldn't you?


You might not because the goals of a founder and an investor are different.

Investors know that their returns are generated by a handful of super-successful companies. And so they have a natural pressure to "swing for the fences".

Founders have a tremendous amount tied up in THIS company, and are naturally risk-averse.

So you get conflicts like the following. There is an initiative which has 20% chance of losing everything, but could double how much you make. Investors will always want to go for it. Founders reasonably may not.


A typical woodhead's thought. "Accelerate your business's growth". Hahaha. Hard things have to be done solo because explaining to others is slowwwwwwww.


Hard things have to be done solo because explaining to others is slowwwwwwww.

A million times this. I never really understood how hard it was to explain a (in my mind) simple new technology to the lay person until I had to do it. This is even after spending years as a technical briefer for high power executives.


What I meant is actually not about external investors. My point is, sometimes even adding equally competent technical collaborators won't work; it's like digging a tunnel: the working surface is only so wide, and an extra worker can do little more than stare at the working man's ass.


Because all of that will distract you from actually developing the product. Granted, this won't work for most people, but if you're extremely talented like geohot then it may not be a bad call.


Because creating a self-driving car is an extremely creativity-intensive exercise that demands "smartness"... but smartness doesn't add linearly (or, I could posit, even monotonically). If 1 smart guy can produce 1 self-driving car in, say, 6 months, it doesn't mean 2 smart guys can produce a self-driving car in 3 months. Once you have a bunch of people, second-order and third-order interactions between them get complicated, and managing that becomes its own time/money sink.

As for money, yes, it can accelerate growth in its first-order effect; but it also induces stress and so threatens early exhaustion of your other precious resource: personal motivation.

So, as a crack-shot programmer, if you know with 90% certainty you can crank out a self-driving car in 6 months by yourself or fail, but only 20% certainty you can arrange a cohesive team with someone else's money to crank out a car in 1 month or fail (and alienate your team, and ruin your credit)... I would advise taking the 6-month route. Patience is a virtue, and sometimes it's better not to buy into every pot of snake oil the SV hype machine wants to sell us.


Creating 1 job is better than hundreds?


Well, Hotz did state that, “The truth is that work as we know it in its modern form has not been around that long, and I kind of want to use AI to abolish it. I want to take everyone’s jobs. Most people would be happy with that, especially the ones who don’t like their jobs. Let’s free them of mental tedium and push that to machines. In the next 10 years, you’ll see a big segment of the human labor force fall away. In 25 years, AI will be able to do almost everything a human can do. The last people with jobs will be AI programmers.”


Yeah, and the world will split into rich and poor, with the poor starving.


What interests me about your argument is the assumption that the "poor starving" will just sit by and passively accept that.

The reason we don't have an insurrection on our hands now about wealth disparity is that while the wealth of the super wealthy has accelerated hugely, so has the general living standard of the poor. If (when) the jobs go away, that will no longer be the case, and then you are talking about a brutal escalation into a full insurrection. And while the technology and wealth will be on one side, the last 15 years in the Middle East has shown what committed people with pickups and AKs can do against an on-paper massively superior opponent.

I just hope the super wealthy are smart enough to see this coming and avoid it, it would be spectacularly brutal.


Or nobody will ever have to work again.


Bullshit.

It's a nice dream, but the idea of AI and robots doing dishes, picking strawberries, washing cars, cooking meals will never happen.

The best AI cannot beat a population of Mexicans who are basically the glue that holds our modern society together.

If you wanted to see how the U.S. will completely come to a screeching halt, it would be if the rapture took place and only claimed all Mexicans.

Our entire way of life depends on them. AI will never replace them.


Once our entire agricultural system (here in the UK) was dependent on manual farm labourers; now we grow 60% of the calories we consume with 1.6% of the workforce.

> It's a nice dream, but the idea of AI and robots doing dishes, picking strawberries, washing cars, cooking meals will never happen.

If something can be automated at a lower cost than paying wages it eventually will be, automation is coming (arguably has been here since the industrial revolution) and it's not stopped yet.


http://www.foodchainsfilm.com/

Watch this - and tell me what's cheaper, robots or Mexican slaves.


In a word, yes.

"Jobs" are not an end in themselves, and are decreasingly relevant in the information age.


Is this less impressive to you because he didn't 'create jobs'?


Self-funding this experiment is probably harder than creating 100 jobs.


I think his/her point is that just because the usual suspects aren't backing this venture, there's lot of negativity about the project here on HN.

Like Palmer Luckey of Oculus VR, I hope G Hotz has a similar story to tell at the end of it all.


> “I understand the state-of-the-art papers,” he says. “The math is simple. For the first time in my life, I’m like, ‘I know everything there is to know.’ ”

Yep, he's still in his twenties.


But that belief is enough to attempt something that more experienced people would hesitate to start.

Naivety is a very good thing at times.

I've seen average people achieve incredible things, and not because what they did was incredible... but just because they started work on things that no-one else thought they could complete. Some way into it, when enough progress has been made, people have rushed to give support because "halfway there but badly done" is a hell of a lot better than "not even started yet".


I don't disagree with that. I've worked with some very smart people in my 20s who sounded similar to Hotz -- enthusiastic, retrospectively naive about their understanding of a field, but above all, superbly intelligent. They did really great things, things that maybe didn't work perfectly or as envisioned, but still things that might have scared off more experienced folks.

But also now that I am in my 30s, and they are as well, we frequently look back at that time and laugh about being that young. "Man, you were fun to work with, but also what were we thinking"

So I definitely wish Hotz all the luck. If nothing else, the more smart people working on the problem of self driving cars, the better.

My comment mostly stemmed from amusement of his quotes.


There have been people in my past who wanted to start a project that I didn't think they were capable of finishing, because either it was too large, they didn't have the skills/smarts (not that I thought they were stupid, just that I thought it would take exceptional intelligence), or both. A few of them succeeded, either in the original task, or because the effort and journey were well worth the price paid.

Part of this was hubris. The thought of someone I considered less capable than myself accomplishing something I felt I could not damaged my ego. This was humbling.

Part of this was experience. The experience to know that attempting the hard or impossible is sometimes worth the effort, whether you succeed or not. This was educational.

Part of this was ambition. Ambition to do something new, to ignore the naysayers and noways when needed, and forge your own path, which I've always felt short on, but have steadily worked on over time. This is ongoing.


Another part of my problem is that I have too many projects I want to do. Learning about AI is one example, but I've instead done a series of web and mobile apps which are much closer to success. It would take a lot of time to read all the AI research and become good enough to tackle a problem like self-driving cars, and I've only got my spare time at home, with which I must also make sure my wife remains happy (ignoring her seems to make her unhappy for some reason) and keep my sanity (read fiction or play a video game sometimes) and take care of my house (the lawn just won't stay mowed).

I do remember being about 19 and thinking I was the best programmer in the world. By about 22 I had rewritten as much of my old code as I possibly could because it was so horrible. Somewhere between there and now I've gained a cynical bit of humility to temper my ego. I think the cynical part is that my ambition has not lessened, just my belief that I can succeed.

One Steve Jobs philosophy is focus and say no. I'm guessing I could do better if I said no to all but a single project.


A friend of mine in college had a very good saying about this that I always keep in mind:

"There's nothing like succeeding at something you weren't even qualified to attempt."


When you fail at something you're not qualified for, it doesn't feel like failure. You're able to get right back up without even bruising your ego.

Thanks for sharing.


I like this.


Sure. But that belief is a genuine worry when you are talking about creating something that'll move a hulk of metal down a road at 70mph.


If we don't have laws currently, we sure as hell need them. I can't imagine letting everybody try their own self-driving software.


Millions of drunk people drive on the roads. People can buy assault weapons, with no training and background checks. People who make self-driving software should be the least of our concerns.


There are laws against drunk driving (and harsh penalties for those that are caught), and you can't buy a firearm without a background check from a dealer (with more states requiring gun show dealers perform background checks now, too).

People building untested self driving cars is an entirely legitimate concern.


There aren't laws against driving while teenage:

"Nationally, 963,000 teen drivers were involved in police-reported motor vehicle crashes in 2013, which resulted in 383,000 injuries and 2,865 deaths"

I'd worry about that more than the odd geek with a laptop.


Legitimate: yes. Worthy of concern: absolutely not.


I don't think this would help at all because 1) most people are not interested in making their own self-driving car and 2) the small niche that is interested isn't going to worry about following the laws, as Hotz states in the piece.


From the article:

“I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”


Indeed, I would have a word with whoever made the 2nd law of thermodynamics.


This is just nitpicking. He is clearly talking about social laws, not fundamental laws of nature, which are different in scope and application.


So he'll feel morally okay if he kills or cripples someone by being reckless?


i wonder where he believes laws came from ...


From assholes. i.e. other people.


"Assholes", it would appear.


This was my first thought when I read the article. There ought to be some test track qualification before allowing a new system to be tested on the public road.


we already have enough laws. responsibility doesn't change because a computer does some thinking for you. Besides, how many people are actually writing their own self-driving software?


>Naivety is a very good thing at times.

That's what I think 'foolish' means in "Stay hungry, stay foolish."


"Naivety is a very good thing at times."

He might not be naive.


>>I've seen average people achieve incredible things

In my opinion, we must never underestimate people.

>>but just because they started work on things that no-one else thought they could complete.

Nothing fails like smartness. The reason a few people achieve the impossible while far more intelligent and smart people don't is that the curse of intelligence makes them believe certain things are impossible.

The fool didn't know it was impossible, so he did it.


I would totally agree with you IF this kid hadn't proven his chops with iPhone and PS3 hacks, not to mention building a self-driving car in his garage.

I also realize this kid probably won't end up making a huge dent in the universe.... but.... statistically speaking, there should be several "Leonardo da Vinci"-level humans alive right now. Why not this kid?


> I would totally agree with you IF this kid hadn't proven his chops with iPhone and PS3 hacks, not to mention building a self-driving car in his garage.

Impressive as they are, his chops still don't support his claim to "know everything there is to know". The Dunning-Kruger effect is in full swing.


Sure, but that claim was made in the context of deep-learning networks. He went to work for an AI company, and realized that he knew -- from reading cutting-edge academic papers -- as much as the forefront of the field. He wasn't claiming to know everything there is to know in general, or even in software development, just that he can understand and implement machine learning with the best of them. Personally, I don't doubt that claim.


The field doesn't require a particularly extensive background either. A good grasp of linear algebra and multi-variable calculus basically has you set to understand even the state of the art in the field. Of course, coming up with the papers would require a whole lot more work.


"I know everything there is to know."

I kind of took that to be like how Musk talks about needing to know first principles. In the article you can see that he was humble about what he thought he knew, took jobs here and there and eventually confirmed that he was at the cutting edge, that he knew 'everything there is to know' about this special area.

That's when he realized that he was qualified to try this. IMO, anyway ;)


On a smaller scale, I remember one day realizing that I, a self-taught programmer, knew more than my boss. Within six months I left to start my own company.


Nice. I'm still struggling to feel I know enough to do my own thing other than lead gen and optimization for others.


How smart was your boss, and how did it go with your company?


The boss was pretty smart in the sense of knowing how to work with big corporations to build large decision support systems. But his technical knowledge was fairly shallow.

I sold my first company and the investors did very well, but I made tons of stupid mistakes in the process. Not least of which was holding on to dotcom stock that I thought would go to the moon but which mostly went down the drain.


That statement is clearly tongue-in-cheek, come on. "I well-understand the cutting edge of this narrow research area" is less fun to say, but that's the meaning.


Typical twenty year olds don't read all the state-of-the-art papers on a subject before saying they know it all. It sounds more like he's caught up on the latest AI research and fundamentals.


If he really did read the papers then it's clear he would not say this. The papers aren't an end; they describe incremental progress. Having just returned from NIPS, where most of the researchers say "We don't know" all the time, I find the claim ironic.


Precisely. We have some stuff that works and we don't know why.


People in their twenties wrote those papers.


The math in any of the papers he's most likely referring to isn't some theoretical PDE math or abstract algebraic geometry stuff... it's pretty understandable if you can grasp a "graduate level" linear algebra course.


That's true. My point was to respond to the breathless reporting about what this guy has achieved so young. He's replicating the work of other mostly young people. Doing it the first time is the trick.

(I'm very familiar with this literature - see my username)


To be fair, this is machine learning we are talking about, not algebraic topology. The experts in ML are still proud of the fact that they figured out the chain rule...


> The experts in ML are still proud of the fact that they figured out the chain rule...

I assume you're talking about using backpropagation with gradient descent. Backpropagation itself isn't all that interesting. The interesting part is that it works for practical problems and doesn't get stuck in shallow local minima.


Never mind that they have no idea of the behavior of the partial derivatives, nor attempt to model it, when presenting their "latest and greatest" -- at least in most of the stuff I've read that's been posted here...


It's so much fun to see people who got through their 20s doing nothing close to great get so jealous.


I cringed. Saying "the math is simple" -- ouch. The writer must have been barely able to suppress his glee when that one popped out.


I don't doubt it at all. Keep in mind that "simple" is relative; we have to ask "simple compared to what?" For lots of people, I bet the math involved in these neural nets is the most complicated math they've ever done. They would never say it is simple, because they themselves barely grasp it. But in my experience, topics in mathematics have a funny way of becoming very simple the moment you "graduate" to thinking about a slightly more general mathematical framework.

Someone who has digested enough of the AI literature to think about the methods in aggregate is very likely to be in a position to see any particular method as a "simple" implementation of some more general set of principles.


As a general observation, what you say has some truth to it.

But the particular quote is referring to learning rates in autonomous robotics, especially visual classification in complex real-world scenes.

I have worked and published in ML since the early 1990s, was a program chair for the learning track at NIPS one year, participated in the same DARPA learning-to-drive program that Yann LeCun did, and don't consider the math behind "state-of-the-art papers" to be simple.

Just taking deep learning: there are a lot of tricks and recipes (e.g., rectified-linear activations, number of layers, staged training) that are not mathematically understood. It's exciting, but mathematically still a jungle. Just because a neophyte can code and optimize a network does not mean that the math that explains why it actually works is simple. As engineers, we need to understand why it works before using it in a safety-critical situation.


While it's a good point that simple is relative, if you look specifically at deep neural networks, we don't understand how training a non-convex function converges with gradient descent -- the fundamental component in creating a usable model. In practice it often works, and there are a few intuitions for why, but it's naive at best to claim that this is simple. If it were, we would understand it better :)


While I don't know a lot about the subject, I would bet that's likely right. As in, there are very hard problems, but the actual mathematics are not all that hard.


The math to implement a working neural net is indeed simple. Even if you consider all the commonly used engineering practices to ensure its correctness and improve its accuracy (like dealing with under/over-fitting), it's still not that hard. In the end, it's just doing multiplications over matrices, calculating derivatives and propagating values back and forth.

Now, to understand WHY the algorithms work, and give you the results they claim to calculate, is quite hard, but that understanding is not required to implement them.
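
For anyone curious how little machinery the "multiplications, derivatives, back and forth" actually involves, here is a toy version: a two-layer net learning XOR in plain NumPy. To be clear, this is just the textbook recipe, not anything from Hotz's system:

    # Minimal sketch: 2-layer network trained on XOR with plain NumPy.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # forward pass: matrix multiplies plus a nonlinearity
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: the chain rule, propagated layer by layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient descent updates
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]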


Well, a lot of people on HN certainly comment on technology with immodesty and an air of authority. It does feel boorish to have someone say it out loud though.


I'd typically tend to agree with you (that a guy in his 20's saying he knows everything is ludicrous), but geohot is an actual savant.


I take that to mean that for the first time in his life he read the papers and fully understood them without needing additional background, not that there isn't more to learn outside of those papers.


He may well know everything there is to know today, but there are bound to be plenty more breakthroughs in AI research. It would be like Newton saying "I know everything there is to know about physics today".


Nobody created anything great by first fully appreciating the size and difficulty of their endeavor. I would say underestimating a problem and overestimating one's skills are crucial to innovation and progress.


or just legitimately an expert in the field.


> I know everything there is to know.

If only he knew about the Dunning Kruger effect...


I wish I could upvote this more. A person in their 20's knows nothing, but thinks they've outsmarted the world. It's not until your 30's that you realize how big of an idiot you were/are and how much of the world you actually understand (read: little).


This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment needs to stop. I agree that many younger people think they understand more than they do but that's just part of growing up and we all go through it.


You are misunderstanding the sentiment. No one is saying 30 year olds know everything, they are saying 30 year olds _realize_ they don't know everything.


Or as Socrates put it, "the only true wisdom is in knowing you know nothing"

(probably one of the more Buddhist-ish gems from Western philosophy)


Why does it need to stop if you agree that it's a fact of life? No one is saying 30 y/o's are enlightened, just that they have a bit more perspective. The same can be said for twenties vs. teens. It's not that teens are idiots - they are just teens, with the life experiences and perspective of a teen.


This:

> A person in their 20's knows nothing, but thinks they've outsmarted the world...

Is a dangerous and gross generalization. I totally agree with the changing of perspectives point, but feel that this community has a very clear bias from the older gen (30s and up) against the younger gen (teens and twenties). That's all I'm saying. It's divisive. Instead of saying they "know nothing", it should be phrased, "still have a lot to learn."


That is just a semantics argument.


The semantics are relevant though, as they highlight the sentiment I was trying to shed light on in my original comment.


I don't believe that's accurate though: saying people in their 20's know nothing, or that people in their 20's still have a lot to learn, is just a way of restating the same fact. But that statement doesn't mean that people in their 30's are enlightened or smarter, only that they now understand how much they don't know.

> This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment


[deleted]


Age is shorthand for "I've spent X years making a lot of mistakes and learning from them."


Watch Geohot do a CTF live: https://m.youtube.com/watch?v=aZJM-iIpbqc

I think the point you are making is generally valid... But he is a savant. I don't think it is wise to apply generalities to him.

Yes, age will change some of his sharper edges, but he is already pretty unusual.


I took his meaning to be that, with all the stuff about technology and AI, he's at a point where he feels he can start innovating because he's learned enough (it says he went back to school to get his PhD and worked at an AI company before quitting to work on the car), hence why he feels so sure that his technology is better than Mobileye's.

I don't think this has anything to do with saying he knows everything he needs to know in the world.


I'm not disagreeing, but it's in bad taste to make a personal judgement on someone without meeting them personally. The guy could be cocky, or the writer of the article could've just made the guy appear cocky.

Anyway, wouldn't you agree that it is better to be empathetic rather than thinking you're an idiot?


...said a guy in his 30's


Busted. Which is why I didn't say shit about what it's like to be in your 40's because ... I have no idea.


Oh, it's horrible: you've seen whole cycles, so you know how even the good things you could do next go bad in the end. It's easy to fall into excessive cynicism, plus to stop learning new stuff because of the 30s lesson of how hard it really is to learn anything in full.

To be honest, I recommend faking to yourself that you're in your 20s still :) Much healthier attitude.


Isaac Newton invented calculus in his early 20s


He also nearly killed himself with alchemy experiments. He was very right about some things, and very wrong about others.


And he started doing this in his 30's/40's when he had stopped contributing to physics. Maybe you grow more senile the more you age.


how many pioneering mods/hacks did you do in your 20's?


Like most hard problems, it's easy to pick off the low-hanging fruit and claim that you have a solution.

Self-driving cars (in some form or other, under some loose definition of "self" and "driving") have been around since the 1920s. But it still remains a vexing problem.

It is quite easy to program a car to stay between 2 cars and follow the car in front. It is quite another to have the same car drive on (a) a road without lane markings; (b) in adverse weather conditions (snow, anybody? Hotz should take the car to Tahoe); (c) in traffic anomalies (ambulance/cop approaching from behind; accident/debris in front; etc. etc.); and so on.

No offense to GeoHot, but I'd love to see his system work in rush-hour 101 traffic; or cross the Bay Bridge, where (coming to SF) the lanes merge arbitrarily.

The key challenges are not only to drive when there's traffic; but to also drive when there's NO traffic, because lane markings, etc. are practically nonexistent in many places.

Having said all that, I still admire his enthusiasm and drive (no pun intended). Tinker on!


TBH, since it's a training-based system, it's "just" a matter of making sure the training set is large enough, including the situations you mentioned (assuming the training method is robust, generalizes well, etc). I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+? 1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on) -- and I suspect there are many of them (at least 100s?). Estimating about one hour of driving between tricky scenarios, this puts the number of hours at something like 100,000+ -- not easy to come up with by himself (that's over 45 years of driving 6 hours a day).

Mobileye is doing something interesting by curating the reliable parts of the dataset (e.g. they have curated databases of traffic signs for each region) -- again not something you could do on your own, and seemingly archaic (hence GeoHot's criticism), but if you can afford it, it can speed up the training significantly.

Tesla is a massive resource here because they already have a huge fleet of internet-connected cars providing enough data to fill the aforementioned training set in a matter of days or months: let's estimate their fleet at 40,000 cars -- then they could fill that minimum dataset in a day or two, and in a month they might have a 100x safety margin. Of course, there's a big technical problem of relaying all that video (maybe they just relay prediction failures), but the data is there.

Another fundamental problem with exclusively hands-off training (and little optimal control theory, etc.) is picking up bad habits from drivers -- even the best algorithms will have a hard time and, in the best case, be only about as good as a good driver in each scenario, since the training data acts as the ground truth.
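
For what it's worth, here is the back-of-envelope math above in runnable form. Every number is a guess carried over from this comment (plus an assumed ~2 hours of driving per fleet car per day), not measured data:

    # Back-of-envelope estimate; all inputs are guesses, not data.
    edge_case_types   = 100     # distinct tricky scenarios (guess)
    examples_per_type = 1000    # training examples wanted per scenario (guess)
    hours_per_example = 1.0     # hours of driving per tricky encounter (guess)

    hours_needed = edge_case_types * examples_per_type * hours_per_example
    solo_years   = hours_needed / (6 * 365)       # one person, 6 h/day
    fleet_days   = hours_needed / (40_000 * 2)    # 40k cars, ~2 h/day each

    print(f"{hours_needed:,.0f} driving hours needed")
    print(f"~{solo_years:.0f} years solo vs ~{fleet_days:.1f} days for the fleet")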


> I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on)

The problem is: there are new edge cases born every day.

Consider, for example, an accident where the cops have set up flares. How often do you come across one of those? Very rarely, I imagine. And even if you did come across it in your training set: how does the ML know that you are following the cops' signals, and not just randomly switching lanes? That the flares are a critical signal?


Good point, but if you consider the Tesla dataset... it's formidable. Every day they could collect the equivalent of ~55 years' worth of one person's heavy driving. Even if you never encountered this case yourself, if it happens at all it's likely to be seen many times (probably 100+ in a few months) in that dataset. After self-driving cars have gone mainstream, this may start to be seen as a design problem by traffic agencies: they might standardize ways to deal with traffic a little more.

Ultimately, as long as the number of cars driving autonomously is small enough and procedures change slowly enough, you should be able to continuously update the driving system.

But let me reinforce that a pure learning approach, even with very large datasets, may not be as efficient as one would like -- the curation of signs is a good idea, and manually reviewing accidents and near misses (a highly human-intensive task), and perhaps flagging bad driving behavior (probably after some outlier screening, which can be good or bad), will be important to get it really good with the training-intensive approach (as opposed to the top-down optimal path planning and control approach).

EDIT: Mobileye CEO discusses some interesting design issues and manual validation (and shows they have lots of data, good sign) https://www.youtube.com/watch?v=kp3ik5f3-2c&feature=youtu.be...


> I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on) -- and there are many of them I suspect (at least 100s?).

It depends on what sensors are in use and how the environment affects them. I can't get into much detail unfortunately, but I have seen radar systems that use naive Bayes classifiers for target detection and classification. Those systems required large numbers of examples across a large, multi-dimensional space to work effectively. Target detection and identification is a trivial task compared to what the control system of an autonomous vehicle needs to handle.
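
To make the naive Bayes point concrete, here is roughly what such a classifier looks like in miniature. The features and class statistics below are invented for illustration -- real radar systems use far more dimensions and data:

    # Toy naive Bayes "target classifier"; all numbers are invented.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(1)
    # feature columns: [radar cross-section, doppler velocity, extent]
    cars   = rng.normal([10.0, 25.0, 4.0], 2.0, size=(500, 3))
    people = rng.normal([ 0.5,  1.5, 0.5], 0.3, size=(500, 3))

    X = np.vstack([cars, people])
    y = np.array([0] * 500 + [1] * 500)  # 0 = car, 1 = pedestrian

    clf = GaussianNB().fit(X, y)
    print(clf.predict([[9.0, 20.0, 3.5]]))  # -> [0], i.e. "car"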


what if a driver makes a mistake, like running a red, and doesn't get a ticket?

who validates all this data?

attaching a dnn to a driver as a training set is a pipe dream, for now. maybe after we understand how our brain perceives time and builds models of future outcomes, we could apply that to build better nns. for now, nns are best used as classifiers in a controlled environment, not in an environment with unpredictable states.

and especially not in an environment with adversaries http://spectrum.ieee.org/cars-that-think/transportation/self...


The vulnerability to sensor error (adversarial or not) is certainly not exclusive to nn based approaches. I commented on the validation problem in the comment above and in another below, and one way to deal with it is simply manual validation (mainly for false positive elimination). Indeed this approach with dnns is already being employed by Mobileye, so I don't think it's a pipe dream.

Sensor failure or well-characterized adversarial inputs are actually really easy to deal with -- they are very easy to simulate with a given dataset and self-validate against using traditional techniques -- simply make one or more cameras fail (or receive spurious signals) and verify the output.

It's a good point that probably all autonomous cars will need a contingency plan (probably human intervention and/or blind emergency stops) with non-zero probability -- even if you have a redundant network of cameras around your vehicle a critical number can and will occasionally fail (when you look at the fleet sizes that will be dealt with).
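
Here is a sketch of that failure-injection loop, with a stand-in model and random frames. Nothing below is a real driving stack; it only shows the shape of the test:

    # Replay frames, inject camera failures, measure output drift.
    import numpy as np

    def inject_failure(frame, mode):
        if mode == "dead":                    # camera goes black
            return np.zeros_like(frame)
        if mode == "noise":                   # spurious signal on the bus
            return frame + np.random.normal(0, 25, frame.shape)
        return frame

    def worst_drift(model, frames, mode):
        clean  = np.array([model(f) for f in frames])
        broken = np.array([model(inject_failure(f, mode)) for f in frames])
        return np.abs(clean - broken).max()   # worst-case steering delta

    # stand-in "model": steering angle from mean pixel brightness
    model  = lambda frame: frame.mean() / 255.0
    frames = [np.random.rand(64, 64) * 255 for _ in range(50)]
    print(worst_drift(model, frames, "dead"))  # gate this below a safety limit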


It's somewhat frustrating that he continues to get the credit for "hacking the iPhone" when he was neither the first nor the only person on the project. The "iPhone Dev Team" was a group of five to ten people who built tools to jailbreak the phone and unlock the radio. If anything, the first person was a guy called Nightwatch, who was also associated with various .tif exploits to unlock the PSP. As near as I could tell at the time, he worked in some capacity for a South American university. Geohot worked only on the baseband unlock and was forced out of the closed discussions when he released exploits before everyone had time to prepare. This is important because some people's participation in the project could have potentially affected their employment. Luckily I don't know that anything bad happened, but suffice it to say the kid is not a team player.


Take some time to watch his YouTube video explaining the iPhone hack.

He starts the video by giving credit to other people involved in the project.


Nice. Can you link me? I was involved in the project :)


what was your osx86 handle? I was also involved in the early times but don't recognize your name.


It was ixtli. I wrote iPHUC and still own the repo, i think: https://code.google.com/p/iphuc/source/browse/trunk/AUTHORS


ah the very early days then, nice. yeah I understand your take; George did get credited for things that were a team effort (including a lot of work by gray). but really a lot of that was the press, who just based it off him being the one to publish the unlock demo video. he probably just got tired of correcting people.

as a counterpoint that not many knew about, in which he was definitely a team player: he helped us get iOS 3 for S5L8900 devices pwned when we couldn't get the firmware files decrypted due to tricks that Apple put in iBoot (for only that version too -- taken out after), which involved a built-in coprocessor and a payload in an assembly language that wasn't ARM and that none of us could recognize. so to try to work it out he actually reverse engineered the structure of the assembly language to help figure out what it was doing, which was really cool. I don't think the whole payload got fully "reversed", as I believe one of us (it was either ius or myself) found a data sheet that pointed to it being some type of 16-bit RISC-based thing, but it was still pretty cool to see how he went about solving the problem. there was nothing directly incentivizing him to help other than a fun challenge.


The 21" monitor portrait-style in the car is fantastic.

The testing of a hacked-together system on the public road is not. He probably won't kill anyone, but if he were to I suspect he'd get the book thrown at him in the way that everyday death-by-DUI drivers don't.

Actually I'll go further with this criticism: we've just seen drones being FAA regulated because users were unable to refrain from doing dangerous or nuisance things with them, such as flying near airports. DIY self-driving car research is similarly likely to damage the concept if it goes wrong.


Yes, holy shit does it look unsafe!

There's a good reason testing began on closed courses; imagine a kernel panic locking the car on a busy freeway...


But unlike drones, he is in the vehicle the entire time and can easily intervene.

The article does a good job of explaining the inherent danger someone (and those around him) faces when attempting something like this. The point is that he's attempting to push the boundaries and not asking for permission -- generally the conditions required to innovate. Maybe that should be celebrated before you bust out the "b-b-but it's dangerous!"


That screen looks like the mother of all distractions.


Which theoretically doesn't matter if the car drives itself.


For comparison, a similar hacker spirit underpins Tesla Motors propulsion tech: Back in the early 2000's, there was a young engineer driving around Palo Alto in a brilliantly hacked electric Porsche 944, which would do about 130mph on the highway.

His name was JB Straubel, and nowadays he's Tesla's CTO.

Best of luck to Hotz!


> The last people with jobs will be AI programmers

Geohot makes a decent point. The way the industrial revolution reduced manual labour and made thinkers and tinkerers much more valuable, the advent of AI (true AI, mind you, not the narrow stuff we currently call AI) might actually make us obsolete. It is a peaceful and yet terrifying thought.


>The way the industrial revolution reduced manual labour, and made thinkers and tinkerers much more valuable

Didn't the opposite happen? Suddenly artisans were out of business as nobody wanted beautiful hand-made luggage, or whatever, that would last decades; instead we'd get some engineers, maybe consult an old-school luggage guy, get some specs, and toss that data into cheap mass production and be done with it. Sure they're uglier and last only a few years, but they're cheap and plentiful. Now those artisans had to close their shops and either go work in the factory or retrain for something else.

The I.R. was great for making things fast and cheap, but it was pretty much a full frontal attack on "maker" culture that has only rebounded in certain areas fairly recently -- namely computers, because you can trivially own the "means of production": a computer with a compiler.

I see AI going the same way. It'll be shoddy, cost jobs, and work "just good enough." The question is: is "just good enough" appropriate for 5,000 lb death machines speeding at 75 mph? Think of all the very serious development protocols and standards that aerospace and NASA use. And that's for machines that will never have to worry about poorly painted/paved roads, kids running into the street, snow covering everything, etc. I think the AI-guided future is far off and will hit a lot of practical limits quickly, and the bullish attitude behind AI is more than a little unjustified.


Technology innovation shifts the economies of art. For example, representational painters lost a lot of business to photographers. On the other hand, art photography became an industry.

There weren't a lot of people employed making movies or TV shows before the industrial revolution; now those industries provide lucrative employment to millions. Maker culture shifted with the technology.

That said, I agree with you that AI is being pretty dramatically oversold (again).


I am not convinced that computers will ACTUALLY REPLACE ALL jobs. Technology could get advanced enough to do all that humans accomplish today, but that doesn't mean that humans will not be doing any jobs any more. There are 2 main problems I see with that:

1) Historically, technology has created more jobs than it has destroyed. http://www.theguardian.com/business/2015/aug/17/technology-c... One BIG difference: technology till now has been designed to aid humans, not replace them. But I think technology will shift a lot of jobs into other sectors without bringing the number of human brains needed to run it to zero.

2) Society will not be so nice to organisations taking away ALL its jobs. There will be huge friction from society when people lose ALL their jobs at this scale; it could vary from small riots to some new-age guerrilla warfare. It will be interesting to see how politicians react when they lose their jobs to computers, though.


I've thought about this a lot before actually. In my opinion, the last people with jobs will be mathematicians. If you think about everything that would be hardest to automate, I think deep mathematical theory would be the most difficult (automated theorem provers have their limits).


I think this misinterprets what the history of the industrial revolution teaches us. When steam shovels replaced humans, it's not that only steam-shovel-makers had jobs anymore. Instead, new humans who would have been employed breaking rocks, were instead free to go invent or work in entirely new industries.

Entertainment, health care, and communications are industries that have exploded, in terms of economic value and employment, since the industrial revolution. One thing these industries have in common is that the value they provide is dependent on humans. People want to talk to people, see stories about people, and be healthy themselves. Technology enables these industries but technology does not replace the heart of the value.

Maybe someday, if AI evolves into artificial persons, then AI could take part in conversations, be entertained, and wish to prevent its own death. But that is probably a long way off, and might never come. It's not like people are working on artificial personalities now. No one is working on car AI that will tell its owner to fuck off because it's watching Mad Men and doesn't feel like driving.

The industrial approach to AI is to build better tools. Not to build new independent beings. Maybe it's possible that a tool can become a being, but we have no evidence at all that such a thing can happen.

So the future of employment will probably not be unemployment, it will be different employment. Truck driver jobs will get replaced by truck-driving AI, but the children of truck drivers won't miss it. They'll grow up in that world, and work in some other new industry, like virtual world designer or personal medical consultant or artist or athlete.


And chess players. Computers will never beat the best chess players.


Jonny is being sarcastic--computers have already beaten the best chess players.

But I would point out that even though that is true, people still make money today selling chess sets and chess software and chess books and chess tournament tickets to other people.

edit: clarity


Ah, that explains it. Should have read parent post more carefully.


I believe this is not true. There are chess engines with Elo ratings several hundred points above Magnus Carlsen's.


A chess game is, in the end, a massive tree of all possible moves that can be made. Once computers are powerful enough to search all those possible moves, they can be sure of winning (or at least not losing) every game against a human starting from move #1. Just like tic-tac-toe.
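
Tic-tac-toe really is small enough to do exactly this. A few lines of minimax walk the entire game tree and prove the game is a draw under perfect play (chess's tree is astronomically larger, which is why engines search selectively rather than exhaustively):

    # Exhaustive minimax over the full tic-tac-toe game tree.
    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):  # +1 if X wins, -1 if O wins, 0 for a draw
        w = winner(board)
        if w:
            return 1 if w == "X" else -1
        if "." not in board:
            return 0
        nxt = "O" if player == "X" else "X"
        scores = [value(board[:i] + player + board[i+1:], nxt)
                  for i, c in enumerate(board) if c == "."]
        return max(scores) if player == "X" else min(scores)

    print(value("." * 9, "X"))  # 0: solved -- always a draw with best play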


> the advent of AI (true AI, mind you, not the tiny stuff that we currently assume) might actually make us obsolete. It is a peaceful and yet terrifying thought

We will become the children of our new AI parents. We are building our future caretakers.


And why can't true AI be capable enough to program AI? Don't we compile C compilers with C?


That's the point--the LAST people with jobs will be AI programmers, but they probably wouldn't have long to celebrate that status.


Can you point to one such job where someone lost his/her job to "AI"?


Look at any automated warehouse these days. Pickers, stockers, warehouse managers... Those are going by the wayside.


Would you consider those machines AI?


Narrow AI, yes. If it simulates a human action and adapts to the environment to a small degree (e.g. it can recognize a screw bolt and pick it up even if it is in a different location every time), it's narrow AI.


It can do even more than that. Warehouse organization, putting high-usage/fast-moving objects towards the front of the warehouse, reorganizing based on usage and expiration, restocking, kit packaging, etc... It goes way beyond just picking things up with a forklift and moving them to a different location. All things a person or team of people used to get paid six digits for are now just run by robots and AI.


strong AI? no -- but then, that hasn't been achieved yet.

but weak AI? absolutely. Look at amazon's picker bots.


This is a much better question than I initially thought. I started out listing things, but I'm not sure any of the cases actually include AI as opposed to general IT/Robotics/Globalization.


Financial reporting is likely on the front lines: http://www.npr.org/sections/money/2015/05/20/406484294/an-np...



Travel agents.


Librarian


1-800-555-1212 operators replaced by Tellme.


How on earth is this a peaceful thought?


Because wage slavery is ultimately not much better than slavery, and its abolishment does not necessitate the collapse of society. A world where no one is forced to work is a better world.


Wage slavery is just fine as long as I can feed myself. Time has proven that the fantasy that we will all kick back and let the machines do all of our work is a fallacy.


Just because it hasn't happened yet, doesn't mean it won't happen in the future.


I don't see any reason why I'd love my (true-AI) artificial children any less than my real children :)


You won't love them less, but they will grow up in a world where they are not necessary. They will struggle to feed themselves, and the value of their effort will be low. They will be relegated to mundane, highly simplified tasks. Economics are cruel.


> Amazed, I ask Hotz what it felt like the first time he got the car to work.

>“Dude,” he says, “the first time it worked was this morning.”

I can't tell if this is a joke or unbridled hubris. Either way, self driving cars seem like a new hacker space.


Most of the article reads as a stylized joke, written to make Hotz seem like a Tony Stark sort of figure.

I'm sure he has a good handle on the tech running the car, he's at least looked at licensing it to test on the freeway (learning from the backlash on the Jeep Hack), and he had tested the car extensively before inviting a reporter with children to test it at 70mph on the freeway.

If you include these details and less of the sassy dialogue and flourishes of panache [1], you get a less interesting article.

Bloomberg goes for sexy. Take their reporting with a grain of salt. It's more art than fact.

[1] At one point, the virtual-reality company Oculus Rift failed to man its booth at a job fair, and Hotz took it over, posing as a recruiter and collecting résumés from his fellow students. None of this was enough to keep him interested. “I did two semesters [at CMU] and got a 4.0 in their hardest classes,” he says. “I met master’s students who were miserable and grinding away so that they might one day earn a bit more at Google. I was shocked at what I saw and what colleges have become. The smartest people I knew were in high school, and I was so let down by the people in college.”


> Bloomberg goes for sexy.

I think it is more about the reporter's writing style. The reporter -- Ashlee Vance -- is also the author of a recent book on Elon Musk.


Good call, seems as though this author has built his career around constructing SV cults of personality. This piece is just frothing the waters a bit for his next book on Hotz.

I stand by my assertion that Bloomberg goes for sexy. You can see it in their design, marketing, choice of topics, and choice of contributors. That's not to say they don't put out great and entertaining articles. Just that it always pays to think about who is backing, writing, and profiting from the publicity of the article.


Also: "Hotz hadn’t programmed any of these behaviors into the vehicle. He can’t really explain all the reasons it does what it does."

Good luck to him understanding how to fix corner cases: he's built a black box.


You're describing the fundamental tradeoff with all neural networks.


This will be safe on the road for sure.


Are YOU safe on the road? Have you tested all corner cases on yourself?


> “Frankly, I think you should just work at Tesla,” Musk wrote to Hotz in an e-mail. “I’m happy to work out a multimillion-dollar bonus with a longer time horizon that pays out as soon as we discontinue Mobileye.”

> “I appreciate the offer,” Hotz replied, “but like I’ve said, I’m not looking for a job. I’ll ping you when I crush Mobileye.”

> Musk simply answered, “OK.”

I have to agree with Elon here; Hotz is such a good fit there. But Hotz knows best: if he thinks he can take down Mobileye, then he made the right decision. It sucks that Tesla wouldn't back it, but I'm sure other car companies would buy Hotz's software.


I don't think there is any way that Hotz's and Musk's personalities would get along in any type of working relationship. They are both difficult people to deal with (from an outsider's perspective).


That's the funny thing about INTJs; they don't seem to get along well with each other.

(*cue the preemptive disclaimer about MBTI being for entertainment)


Hotz is an ENTP actually, so the two of them would get along extraordinarily well, assuming egos weren't in the way...


Ah, indeed. I forgot the part where he mentions that he doesn't like living alone. You sure about the P though? His thoughts on his projects include lots of specific plans and he seems rather certain about his decisions (whereas P types tend to get caught in a quagmire of possibilities).


Agree, but I'm sure Musk has a tremendous amount of respect for Hotz not only turning down millions to forge his own path, but also for simply executing. Hotz is putting his money where his mouth is and that is something Elon has a lot of first hand experience with.


I'd be far more interested in a piece of software for my car put out by Hotz than either Google or Tesla. Because the latter two are almost certainly going to keep it proprietary as heck. (Tesla has notoriously actually called people to ask them to stop tinkering with their car when they detect the interface has been connected to.)


you're sure to have a more interesting ride with hotz's software.


> "He says he’s come up with discoveries—most of which he refuses to disclose in detail"


We'll see where things get in the future. If I wasn't sure where I was going yet, I might hold a few things close to the vest initially too. There's a short-term value too, where he has to ensure Google or Tesla or Uber doesn't become too dominant first. I really hope self-driving systems are an ecosystem, not a monoculture.


He seems like a pretty cool and level-headed person. If you watch the video, they're working on phase 3 of car automation, which is basically when you're on the highway (or on smaller roads) and the car takes over for you. It seems like Google is working on phase 4, which I feel is basically too far off (no reason for us to need cars that can drive themselves without anyone in them). Also, Tesla, Mercedes, those are all phase 3 (Autosteer).

Also pretty cool he's working in his garage :P.


he says phase 3 handles 99% of your driving task. So I guess it would be way more than that.


I think this number of "99%" is only valid for large US suburbs, which are way easier to navigate than a typical small town in Germany/Europe. Over here we have so many narrow streets, millions of pedestrians and bicycles, construction zones and so on. More often than not it is difficult for a human to figure out what is going on, so it will be much harder for a car to do the same.


Yup, I thought about Marseille the other day and I don't see how a self-driving car could drive there.


>The smartest people I knew were in high school, and I was so let down by the people in college.

He seems like a good person to get into business with. He's so non-judgmental. Reminds me of myself and all the stupid things I said to VCs in my 20s.


I'd like to give him the benefit of the doubt and say that this is a comment on how smart people often accept stifling constraints on their thought and actions "because that's how it's done."

It is my belief that this self-driving car project is really a kind of first for Hotz. He's famous for finding and exploiting the error-modes in what other people built; now he's actually building something. Methinks he's about to learn a lot about himself and the nature of the world. He's clearly extremely smart, so I'm actually rather excited for him!


Aren't judgmental people the best kind of people to get into business with? At least as long as they can judge people/situations accurately, because that implies that they will make better decisions for your company w.r.t. hiring initial people and choosing strategies.


Being able to assess people is completely different from being judgmental.

The former is a result of experience and social intelligence; the latter is mix of bad attitude and superiority complex.


@imgeohot - Before launch, you should look into a communications protocol between the vehicles. It appears to me that the new LiFi standard might be perfect. You might be able to use the laser range finders themselves to communicate between vehicles.

What to communicate? I'm not sure, to be honest. Road conditions or notifications of the position of obstacles is one obvious thing. Advertising the current version of the software and pushing signed OS upgrade binaries is another. Voice/Video chat with other vehicles in range would be cool, as is media syncing and discovery.

Building in some kind of Bitcoin based payment protocol would be fun too. You could load your cars Bitcoin wallet with some funds and tip cars around you all over the LiFi.

I'm not saying you need to build all that stuff, just put a good, hackable messaging protocol into the system before wide release :-)

Great work man. Good to see people with a good hacker ethos accomplish really cool things.
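
To give the "good hackable messaging protocol" idea some shape, here is a toy of what versioned, signed car-to-car messages could look like. Every field is invented for the sketch, and the shared-key HMAC is a placeholder -- a real system would need per-vehicle keypairs and a vetted standard:

    # Toy V2V message format: versioned JSON body + HMAC signature.
    import json, time, hmac, hashlib

    SECRET = b"demo-key"  # placeholder; real cars need real key management

    def make_msg(msg_type, payload):
        body = {"v": 1, "type": msg_type, "ts": time.time(), "data": payload}
        raw = json.dumps(body, sort_keys=True).encode()
        sig = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
        return json.dumps({"body": body, "sig": sig})

    def verify(wire):
        msg = json.loads(wire)
        raw = json.dumps(msg["body"], sort_keys=True).encode()
        expected = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
        return msg["body"] if hmac.compare_digest(expected, msg["sig"]) else None

    wire = make_msg("hazard", {"kind": "debris", "lane": 2})
    print(verify(wire))  # the parsed body, or None if tampered with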


The cars could signal a turn ahead of time, or indicate how hard they're about to brake. With a large enough network, they could even predict traffic patterns and choose a different route.


>At Google, he found very smart developers who were often assigned mundane tasks like fixing bugs in a Web browser; at Facebook, brainy coders toiled away trying to figure out how to make users click on ads.

I'm not sure those two are equally horrible though - fixing complex bugs requires a lot of skill, and the high you get when you finally nail one is not to be missed.

Getting people to click on ads though - that's genuinely depressing.


Ok cool, you find one of them enjoyable. He (subject of the article) finds them both terrible.


Yes, anyone can find anything enjoyable or terrible - but the point I was trying to make was that most hackers enjoy bug fixing by nature, and Geohot seems to be exceptional in that he despises bug fixing as mundane.


Hackers enjoy bug finding or simply building new stuff. Why would they like fixing bugs? Why would any developer enjoy it for that matter? Yes, it's something necessary that all developers should know how to do, but that doesn't make it enjoyable.


A nagging bug is ultimately one more thing you don't understand. I don't know about anyone else, but as someone who loves to take things apart, I don't sleep well when I find there's something out there that I don't understand. Besides, bugs are a given if you're building something new, and if you don't find them interesting you're not going to fix the most challenging ones. If you ever followed Linux kernel development you'll find many hackers enjoy fixing challenging bugs.


Sorry, but how can this be legal? With his homemade solution, he is not only endangering himself but all the other people in the cars around him.

Usually, before you are allowed to use something like this on a public road, your stuff has to be tested and approved by the state. At least this is how it is in Europe; does this not matter in the States?


He's sitting behind the wheel ready to take over, it's not quite the same as making a cup of tea while the car drives itself.


Yes, but what if his system suddenly decides to turn hard right for no reason while driving fast, and he runs into another car, human, tree, whatever, with no chance to react quickly enough? This is different from a PS3 crashing because he made some error.


The steering is massively torque-limited by the car's EPS module (5x lower limit than Tesla's). It can't turn hard right; it can lazily list to the right, giving you tons of time to react.

I actually have put a lot of thought into safety :)


How do you train it for emergency situations (i.e. a car suddenly turning left in front of you)? I'd imagine it would be hard to get many of those in the training data set.


Easy, you drive in a simulator! (i.e., backfeed/simulate LIDAR and camera data using a video game -- GTA5, for example). Then try to simulate lots of near-crashes, reactions to traffic lights, signs, etc. Hotz's code just reacts to inputs. Simulate the inputs and you can run any training case you want.
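
A sketch of that idea: the model only ever sees sensor inputs and emits controls, so anything that can fake those inputs can generate rare scenarios on demand. The stub below is obviously not GTA5 or Hotz's code -- it just shows the shape of the loop:

    # Stub simulator: the "world" is just the gap to the car ahead.
    class StubSim:
        def reset(self, scenario):
            self.gap = 5.0 if scenario == "car_cuts_in" else 50.0
            return self.gap
        def step(self, brake):
            self.gap += brake * 2.0 - 1.0         # braking opens the gap
            return self.gap, self.gap <= 0.0      # (observation, crashed)

    def collect(sim, policy, scenario, steps=100):
        log, obs = [], sim.reset(scenario)
        for _ in range(steps):
            action = policy(obs)
            log.append((obs, action))
            obs, crashed = sim.step(action)
            if crashed:
                break
        return log

    # a deliberately dumb hand-written policy, purely to exercise the loop
    data = collect(StubSim(), lambda gap: 1.0 if gap < 10 else 0.0, "car_cuts_in")
    print(len(data), "training samples from one simulated near-miss")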


Exactly the plan for outlier cases. Though perhaps not GTA...


Maybe partner with the creator of the XXX simulator series. You could make some kind of "MMO" where each participant has to run errands the safest way possible while interacting with the others, and upload the training data. Some players could be randomly elected as "mavericks" whose goal is to crash and cause accidents; the other players would have to handle them.

And if highway driving is a problem, why not use a test terrain complete with a fog generator? With RC cars representing pedestrians, other cars, animals, ... feed the video first into an AR system and then give it to the neural net.

anyway good hack, I wish you well :)


email me (profile) if you need a simulated driver ;)


Have you got some screenshots/videos to share of the car screen doing its thing? The Bloomberg video did a pretty good job, but we're geeks and we need more of that stuff.


Yes, what if, set against the thousands of people who will die today [1] (and tomorrow and the next day and the next...) in traffic accidents, one single sprained ankle or whiplashed neck were caused by, gasp, a computer.

Your faux ethical hacking outrage is thinly veiled and entirely misdirected.

[1]http://asirt.org/initiatives/informing-road-users/road-safet...


Huh? If he had skipped the computer and just taken a bunch of phencyclidine before getting behind the wheel, couldn't you have deployed the same derisive dismissal of arguments that people shouldn't drive impaired by phencyclidine? Not many people are going to die today because of PCP either!


You are, of course, assuming that his system was built properly to maintain manual override. If the vehicle is drive-by-wire, I have no idea if his way of hacking the vehicle would have possibly impeded those systems.


I bet the override is the joystick.


There are laws[1] allowing public road testing of such cars in a few US states and EU countries.

[1] https://en.wikipedia.org/wiki/Autonomous_car#Legislation


I doubt this is meant for single individuals instead of a company. I also doubt it is allowed without prior notice.


If he wanted to be a company instead of an individual for legal purposes, he could probably incorporate in his own name fairly easily.


He has a company: http://comma.ai.



I mean http://comma.ai is the self-driving car company. (Reactions was a hackathon project, not a real co.)


If Elon is seriously interested in what he can do, why doesn't he let him use the Tesla test track? That's a safe space for him to try out his solution and affords a control scenario to compare against MobileEye/Tesla.

And I think you missed the part where he wants to drive for Uber, so he's also endangering the passengers in his car. (!)


Pretty sure that driving for Uber is a way to rack up more miles (training data) rather than actually test the system.


Maybe this is another reason he isn't joining a corporate entity for his project. The insurance and legal liability would prevent him from testing and gathering "in-the-wild" data. It's a very risky and dangerous process that, I think, in his mind will give him a faster improvement curve compared to indoor, confined testing.


I don't know what's going to happen with this project of his, but this certainly is an interesting article:

>Sitting cross-legged on a dirty, formerly cream-colored couch in his garage, Hotz philosophizes about AI and the advancement of humanity. “Slavery did not end because everyone became moral,” he says. “The reason slavery ended is because we had an industrial revolution that made man’s muscles obsolete. For the last 150 years, the economy has been based on man’s mind. Capitalism, it turns out, works better when people are chasing a carrot rather than being hit with a stick. We’re on the brink of another industrial revolution now. The entire Internet at the moment has about 10 brains’ worth of computing power, but that won’t always be the case.


Regarding slavery, that's a fatalistic attitude that's not very good for the advancement of human societies.

The other side of the coin of his "technology outgrew slavery, and thus we got rid of it" is that if tomorrow's technology demands some moral monstrosity in order to "work better", we can do nothing but bend over and accept it.

Neither human struggles (from the civil war to MLK and Rosa Parks) nor desires and leadership come into any of this, nor is "working better" (towards what? for what purpose? etc.) defined.


I agree. But still quite an interesting article, one of the more entertaining reads I've had in a while.


re >slavery ended is because we had an industrial revolution that made man’s muscles obsolete

I think that's basically wrong also. It fits better with the advent of the printing press than the steam engine.


Surprised that no one else commented on this: It is completely mad and irresponsible to test a self-driving car on a public highway especially since the one who has built it admits that he has no idea what it is doing. Hotz is putting other people's lives in grave danger and everyone is applauding him for that.


>the one who has built it admits that he has no idea what it is doing

When did he do that?


No time to read the article again to find the quote but the problem is really inherent to the technology. It is very difficult to know what a neural network extracted from the training data, and therefore it is extremely hard to know how it will behave in the future and especially in emergency situations that were not part of the training data.


Hardly. Way worse stuff happens on the road at the hands of dumber people. I bet if he was pulled over, the cops would probably be impressed, then order it towed to his house. No charges.


What do you think would happen if his contraption caused an accident with multiple fatalities?


I met Hotz at SpaceX, and can assure you he's not as cocky as this article makes him out to be.


During my internship at Google I watched Hotz give a talk on QIRA and his Pwnium exploit.

George Hotz working his magic on the computer is the most fucking legit thing I have seen in my life.


Was that talk made available online? Would be curious to see it!


Regrettably, not on the Internet. If you get a job at Google you might be able to search for it on the intranet :)


Absolutely exciting stuff. Imagine if you have 100, 1,000 or 10,000 cars, each with deep learning software on board. Have them all upload data after each drive to a central repository and download updates from other cars. You might start without stuff like 'react to that deer that just jumped onto the road', but when you have 10,000 or 100,000 cars that learn and share their knowledge between them, you'll quickly learn a lot of corner cases.


"“Hold this,” he says, dumping a wireless keyboard in my lap before backing out of the garage. “But don’t touch any buttons, or we’ll die.”"

Quality.


Self-driving cars are very exciting, and we know it can be done -- super cool that Hotz got this working. Now he could really impress the community if he could solve 6 additional problems: http://gizmodo.com/6-simple-things-googles-self-driving-car-...


  He thinks machines will take care of much of the work tied to producing food and other necessities. Humans will then be free to plug into their computers and get lost in virtual reality.
Well, that's an astronomically depressing future.


Said a person who probably spends 10 hours a day staring at a computer


The more technology can seamlessly take care of the foundation of humanity's needs, the better. Leisure is a very new concept and only came about when humans were able to destress from not having to worry about their next meal or getting attacked by predators. Despite all the progress, much of humanity still faces some basic problems due to inequality and access to resources. Technology combined with some other cultural and structural changes can do away with those issues for good.


Of course, but once our needs are seamlessly taken care of we're going to just "plug into our computers and get lost in virtual reality"? That is the depressing bit.


I think he's made the assumption that virtual reality would be the ultimate form of leisure since as the article states, it would allow us to experience things our world can't create on its own. We'll leave it up to each individual to decide whether that's depressing or the ultimate goal.


Hey, I thought the same when reading this paragraph! :)

Are you in for living in the woods and hunting for food when humanity becomes a society of degenerate VR junkies?


Looks like the car he is using already comes with adaptive cruise control and lane keeping assist[1]. Can someone with more knowledge on the subject chime in on how/what he is doing that improves upon those?

[1] http://www.acura.com/Features.aspx?model=MDX&modelYear=2016&...


I always thought the default Ubuntu WM was only used by people who don't know how to change it.


Or those of us who don't care about it enough to change it. There are a lot of things that in the right context are not worth bothering about.


Congrats geohot. Come up with a good development framework for people to build on and it would be awesome. This is good innovation and engineering.

Like the article said, it sure beats writing code to make people click ads, or fixing some obscure deadbeat bug in useless software that nobody uses.


“I don’t care about money,” he says. “I want power. Not power over people, but power over nature and the destiny of technology."

This has echoes of J.R.R. Tolkien:

Anyway all this stuff is mainly concerned with Fall, Mortality, and the Machine. By the last I intend all use of external plans or devices (apparatus) instead of development of the inherent inner powers or talents -- or even the use of these talents with the corrupted motive of dominating: bulldozing the real world, or coercing other wills. The Machine is our more obvious modern form though more closely related to Magic than is usually recognised. . . . The Enemy in successive forms is always 'naturally' concerned with sheer Domination, and so the Lord of magic and machines.


"George Hotz will be a panelist at Bloomberg Businessweek Design 2016 on April 11, 2016."



Can anyone recommend an AI/economics book regarding the implications of a population where jobs are no longer necessary?


https://www.youtube.com/watch?v=7Pq-S557XQU

Also, a book with the same name


This is the same guy that Sony wanted to put in prison for figuring out how to run code on the PS3...

That stunt is also what led to a coordinated attack against PSN that took the service down for more than a month.


Link?



If it passes the written and driving tests at the local DMV, should the car be given a driver's license?


no, because that's silly.


Is it really the best approach to only train from real world scenarios without any programmed constraints? Most humans are terrible drivers and there's a reason so many people die every year in car accidents. It seems like his approach might be more organic but it'd also be really hard to provide training data around emergency situations as others have mentioned here.


If a self-driving car is designed around neural networks, then does that remove the liability dilemma introduced when such a car is involved in an accident? The car panicked and crashed.

If we could move the liability to the car itself, then maybe we could just add the car to its own insurance policy, you know, as if it were a dependent, like a teenage driver.


Wow, here's a ton of possibilities! The neural net powered bank computer stole your money and used it to pay the CEO a bonus - move the liability to the computer! The possibilities are endless.


I meant to say that the liability moves to the guardian which is how common insurance schemes work when dependents are involved.

Clearly, you can't meaningfully punish computers. Hah, or banks, for that matter.


Interesting to see an nvidia shield box on his shelves [0] - I've been playing with one, and the Tegra X1 SoC in there is an absolute beast. Nvidia are pushing this chip for automotive, supported by freely available learning and vision toolkits.

I'd not be surprised to see some interest and support from nvidia on this (if not, then they should REALLY look into it).

[0] http://www.bloomberg.com/features/2015-george-hotz-self-driv...


The NVIDIA Jetson TX1 platform is indeed very interesting. It has the VisionWorks API, with a lot of CUDA-accelerated primitives useful for autonomous driving: SLAM, optical flow, ... https://developer.nvidia.com/embedded/visionworks
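
For anyone curious what such a primitive actually computes, here is a toy sketch using OpenCV's CPU Farneback implementation as a stand-in for the CUDA-accelerated VisionWorks version - these are not VisionWorks API calls, and "dashcam.mp4" is a made-up input file:

  import cv2
  import numpy as np

  cap = cv2.VideoCapture("dashcam.mp4")  # hypothetical input clip
  ok, prev = cap.read()
  prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      # Dense flow: an (H, W, 2) array of per-pixel (dx, dy) motion.
      flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
      print("mean pixel motion:", np.linalg.norm(flow, axis=2).mean())
      prev_gray = gray

Per-pixel motion vectors like these are one of the building blocks for estimating ego-motion and spotting approaching objects.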


I really like his 4-stage definition of a self-driving car. I don't really care about the fully autonomous approach like the Google car. I've driven the adaptive-cruise-control VW in Europe and that was an amazing experience. The only thing missing was lane control, which this guy has done. Personally, where self-driving really shines is long trips on the highway. All I really want is smarter cruise control that can stay in one lane, not bump into anything, and ideally send an alarm if it thinks it needs help - see the sketch below.
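
FWIW, once perception hands you a lane offset and a gap to the car ahead, that wish list reduces to two simple control loops plus an alarm condition. A toy Python sketch - every gain and threshold here is made up, nothing from Hotz's system:

  def control_step(offset_m, gap_m, speed_mps):
      """offset_m: lateral offset from lane center (+ = right),
      gap_m: distance to the car ahead, speed_mps: own speed."""
      KP_STEER = 0.8          # made-up proportional steering gain
      TARGET_HEADWAY_S = 2.0  # desired time gap to the car ahead

      headway_s = gap_m / max(speed_mps, 0.1)
      steer = -KP_STEER * offset_m                  # steer back to center
      accel = 0.5 * (headway_s - TARGET_HEADWAY_S)  # close/open the gap
      accel = max(-3.0, min(accel, 1.5))            # comfort limits
      needs_help = abs(offset_m) > 1.0 or headway_s < 0.7
      return steer, accel, needs_help

  # 0.3 m right of center, 15 m behind a car, doing 30 m/s (~67 mph):
  print(control_step(0.3, 15.0, 30.0))  # -> steer left, brake, alarm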


This one takes the cake for me: “It scares me what Facebook is doing with AI,” Hotz says. “They’re using machine-learning techniques to coax people into spending more time on Facebook.”


His AI strategy of not using IF statements sounds influenced by the Sussman & Radul paper The Art of the Propagator. In this related course you also learn how to program AI decisions based on pattern matching - like him giving space to a cyclist and the AI later doing the same: https://groups.csail.mit.edu/mac/users/gjs/6.945/
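
For the curious, the core propagator idea is small enough to sketch: cells hold values, and propagators are little functions that re-fire whenever an input cell changes. A toy Python version (the driving "rule" at the end is purely illustrative):

  class Cell:
      def __init__(self):
          self.value = None
          self.listeners = []

      def set(self, value):
          if value != self.value:
              self.value = value
              for fire in self.listeners:
                  fire()

  def propagator(inputs, output, fn):
      # Re-run fn and push the result whenever any input changes.
      def fire():
          if all(c.value is not None for c in inputs):
              output.set(fn(*(c.value for c in inputs)))
      for c in inputs:
          c.listeners.append(fire)

  # Hypothetical rule: leave extra lateral room for cyclists.
  obstacle, clearance_m = Cell(), Cell()
  propagator([obstacle], clearance_m,
             lambda kind: 1.5 if kind == "cyclist" else 0.5)
  obstacle.set("cyclist")
  print(clearance_m.value)  # -> 1.5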


Can we get some third party verification here? I live in Potrero and would like to take a ride in his car, or at least help out with his project... hit me up man!!


He has done a number of interesting projects: hacking the iPhone, Android, and even the PS3, to the point of being sued by Sony [1]. Geohot has potential, so it will be interesting to see what he accomplishes - hopefully some company doesn't swoop in and ruin his progress.

[1]: https://www.youtube.com/watch?v=9iUvuaChDEg


The article mentioned Elon's delaying tactic. I wonder what would happen if Hotz's idea/project was bought out by Tesla.


Geohot amazes me!! With that said, can he be prosecuted for using his driverless car on public roads without a license to do so?


Possibly - I think California has specific laws governing self-driving cars, and it sounds like he wants to test it there. In a state without self-driving laws, it'd probably come down to existing law: is he driving recklessly?


> ‘I know everything there is to know.’ 

Except the law when it comes to exceptions for being in control of your vehicle at all times. Somebody take this guy's license before he kills someone due to a divide-by-zero. Testing this in an abandoned parking lot would be OK with me (probably still against the law, but fine). In traffic it is a definite no.


Probably best that he's working on his own, doesn't seem like the kind of guy you'd want to work alongside.


Or people tinkering with far more interesting things...



Good to see smart people working on something actually useful, and not another group chat or instagram clone app.


Yes it is. Such smart people should be working on these things and should be encouraged. Of course his system is still rough and a lot of work has to go into it, but he is just one person doing a lot of thinking. In the interview he shows a lot of enthusiasm, which is really nice.


> In the coming weeks, Hotz intends to start driving for Uber so he can rack up a lot of training miles for the car.

Really? I did not expect this from him. Why doesn't he put his sensors/cameras/kit on a few hundred or thousand other cars and pay the drivers, or find some early adopters?


So how does his technology/software react to dangers? The video only shows how he keeps his lane...


Probably really badly right now - or it doesn't react at all and would collide in critical and dangerous situations. But that is also not the point yet. The question for the future is how it behaves long-term in non-dangerous situations, while the other cars on the road are self-driving in compliance.


Is it that hard to detect a clear danger like a car that suddenly stopped in front of you on the highway?


Not hard for the sensors to detect; the question is how (or whether) the software reacts.

He uses an artificial neural network to make the driving decisions.

You train such a network by showing it millions of situations (here: images of the surroundings, information about the car's current state, and information about the immediate past) together with the desired reaction (here: how to drive). Hotz probably has enough "normal" data, because you hold the lane all the time when driving, and apparently there were enough situations where the car in front slowed down and he slowed down.

From the latter case you could guess that the car would brake harder if the car in front stopped more abruptly than in the training sessions with a human driver (you would have to test that - you usually cannot just inspect a trained neural network and read that off, because they become insanely complex after a certain depth).

But the car has probably never "seen" bricks falling off the truck ahead, or a kid running in from the left. Those are edge cases, unlikely to happen, especially on a motorway. But I'd still like to know whether his software would just apply the nearest trained case and treat the anomaly as noise, or notice that something is off and alert the driver.
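
To make that concrete, here is a minimal behavioral-cloning sketch in Python - the data, network size, and features are all invented for illustration; this is the general technique, not Hotz's code:

  import numpy as np

  rng = np.random.default_rng(0)

  # Fake "recorded drive": 1000 frames of 8 sensor features each.
  X = rng.normal(size=(1000, 8))
  # Pretend the human's steering tracked lane curvature (feature 0)
  # and throttle tracked the gap ahead (feature 1).
  Y = np.stack([np.tanh(X[:, 0]), 0.5 * X[:, 1]], axis=1)

  # Tiny two-layer net trained by gradient descent on squared error.
  W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
  W2 = rng.normal(scale=0.1, size=(16, 2)); b2 = np.zeros(2)
  lr = 0.05

  for step in range(3000):
      h = np.tanh(X @ W1 + b1)            # hidden layer
      pred = h @ W2 + b2                  # predicted (steer, throttle)
      err = pred - Y
      # Backpropagate the mean-squared-error gradient.
      gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
      dh = (err @ W2.T) * (1 - h ** 2)
      gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
      W1 -= lr * gW1; b1 -= lr * gb1
      W2 -= lr * gW2; b2 -= lr * gb2

  print("final MSE:", float((err ** 2).mean()))

Even in this toy you can see the worry above: the net only knows the (observation, action) pairs it was trained on, so a brick falling off a truck is just an out-of-distribution input.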

Definitely a cool project, don't get me wrong. But I get kind of sad when the work of thousands of scientists who developed the theory, and the work of Tesla, Google, and the big car manufacturers, gets dismissed with "but look at this guy" (in general, not directed at you).


it is. humans are really bad at that.


(This is completely Off-Topic but it's been bothering me for such a long time now and I never got sufficient answers)

Why am I seeing Ubuntu on the screens of developers, experts, et cetera in cover stories such as this one - most of the time the 100% plain Ubuntu desktop, with all the craziness that comes with it? It feels like this is the case 90% of the time. Two more (recent) examples I can remember:

1) Fyodor (Guy behind nmap) running plain Ubuntu on a Notebook while giving a speech at a conference

2) Developers at Honda (Video was an Asimo promotional video) running plain Ubuntu

Since, in my personal opinion, Ubuntu is not the technically superior choice in these cases (though that can be debated), it cannot simply be explained by the system being backed by a company or by the paid support you can buy for it if you need it.

What motivates technically extremely skilled people to use "Plain Ubuntu" instead of one of the many alternatives?

I really don't understand, please enlighten me!

(I actually think it's worth "spending" some Karma on this if I for once get a satisfying answer)


Most of these people have passed the stage of caring about the desktop itself. I would say they are simply interested in a bash shell that sits on a reliable, well maintained, stable distro with good community knowledge.

When you spend all your time at a prompt you don't really care about Unity vs. Cinnamon vs. anything else.

Debian is stable and has fantastic package management, and Ubuntu builds on that with a vibrant community and wide support. For these users, Ubuntu 'just works' and lets them get on with what they want to do. It's about as simple as that.


I actually have thought about it some more and gave Ubuntu a spin and now I am 100% with you. Might seem a little weird, this "sudden" change of heart, but yes, you are right.

If I am honest, I spend 90% of my time in a terminal, browser, or editor (which is also in a terminal). So a system that lets me easily install my most beloved programs via the command line and handy binary packages, that is reasonably stable, runs on most hardware, and has a big community and a lot of documentation - that sounds like a good deal, actually.


Curious what myztic would recommend as well? Ubuntu may not be 'ideal', but it works and installs more easily than arguably any other Linux distro. I have no problem using Debian/Arch/etc., but if you are just trying to get up and going, Ubuntu always seems to be the easiest.


See also my other comment, I changed my opinion on that.

What I originally had in mind were systems like NixOS (for serious development), or something more minimalistic that offers greater control, like Slackware, a *BSD (jails, bhyve, dtrace(!)), or even just Debian (less flashy, more stable and robust).


I personally use Xubuntu. What myztic meant was using "plain Ubuntu" instead of alternatives like Xubuntu, Lubuntu, or Kubuntu.

Xubuntu is medium-light for me and is fast. Previously I ran only Fluxbox on Kubuntu for the KDE apps, which was extremely fast, but with bugs in Fluxbox I switched.


Nope, I really meant Ubuntu compared to other Operating systems / Distributions.


It looks like he has various sensors for gathering driving data. But how do you really know when you have gathered enough dimensions of data in this situation? How do you train for edge cases?

I imagine there will still have to be some hard rules in case the AI encounters edge cases.
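
One common pattern (purely illustrative here - not something the article describes Hotz doing) is to wrap the learned policy in a few non-learned guard rules that can override it:

  MAX_STEER = 0.3   # rad - clamp whatever the net asks for (made up)
  MIN_GAP_M = 5.0   # emergency-brake distance (made up)

  def safe_controls(observation, gap_m, policy):
      steer, accel = policy(observation)    # the learned suggestion
      steer = max(-MAX_STEER, min(steer, MAX_STEER))
      if gap_m < MIN_GAP_M:                 # hard rule beats the net
          accel = -5.0                      # full braking
      return steer, accel

  # With a dummy stand-in "policy":
  print(safe_controls([0.1, 0.2], gap_m=3.0,
                      policy=lambda obs: (0.8, 0.4)))  # -> (0.3, -5.0)

The clamp and the emergency brake are deliberately dumb: they don't need to be learned, just correct.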



But my intuition says Google has a fairly large amount of data. Their cars have driven far greater distances than his, giving them much more room to test. The more data you have to train and test on, the more intelligent the system will be.


So it's more a fancy autopilot than a self-driving car.


It's right there in the article: he is trying to beat Mobileye, the lane-following technology that Tesla rebranded as Autopilot.


You're aware that "auto pilot" literally means "self driver", right?


Yet few people would call an airplane with auto-pilot a "self-flying plane", even with advanced autopilot that can take off and land itself.

Autopilot is understood to mean a limited computer assist that can help with routine tasks, but not expected to allow a plane to fly without a pilot.


What more would a plane need to be "self-flying" beyond taking off, navigating, and landing?


The ability to handle unusual emergency or unexpected conditions - the same thing that's needed to call a car "self-driving".


It means staying on a set course. Not the same thing, right?


A "fancy autopilot" can take off and land.


at this point, yes. clearly this is not the endpoint.


Would be interesting to see demo that is not just car self-driving in a straight(ish) line.


What a cool article!


The fuck. Hotz is awesome... The only other coder I know with a skillset this diverse yet this deep is Fabrice Bellard. I'd love to see them on a team together... they'd probably invent true AI O.o


I really liked Hotz until he went to work for the dark side (Google's Android security) and decided to make smartphones harder to root instead of easier.


I dunno, it's still pretty easy. I think if Google really wanted to, they could make it quite a bit harder. And fixing exploits that give apps root access prevents malicious apps from abusing them, no?


I love this guy's personality.


Great!


> 99%


He must have rich parents or a sponsor. The lidar isn't cheap ($50k).


If you read the article, it says he gets money from hacking competitions.



It looked like the HDL-32E. The VLP-16 is pretty new - I am surprised it's already available; last time I checked it wasn't. Google's cars use the HDL-64E, which costs $70k+.


The guy hacked the PS3. That guy can build a self-driving car.



