George Hotz Is Giving Away Self-Driving Software (ieee.org)
179 points by devnonymous on Dec 6, 2016 | 164 comments

This seems like the kind of end-run around the system that tech folks tend to think is a clever life hack, only to find out that legal/regulatory bodies aren't fooled. Like trying to convince the judge that you weren't speeding because if you set your frame of reference inside the car, it was the road that was speeding, backwards!

I only hope that the folks hand-rolling this onto their cars don't injure too many people in their attempt to "disrupt" cars.

This isn't some "clever trick". It is genuinely not illegal to release a bunch of code open source on GitHub.

The Government asked him to not sell his product, so he complied with the government's request.

> The Government asked him to not sell his product,

Well, the Government didn't do that; they asked for proof that it was safe before he sold it: https://es.scribd.com/document/329218929/2016-10-27-Special-...

They specifically used the words "delay selling your product", which is what he did.

He still seems to be operating a "data network", to collect driving data for further optimization. I think that part makes the "selling" moot, given that his business is based on input from this "free" app...

To give an example of another field, the OpenAPS project for diabetes seems to be walking a very narrow tight rope in their actions and statements in order to not be regulated by the FDA.


This doesn't seem to be a narrow rope at all. Code is speech, as determined by the Ninth Circuit Court of Appeals in Bernstein v. United States. The government cannot prevent you from publishing very detailed descriptions on how a regulated device would operate, which is all OpenAPS is.

It insulates him from the government, not civil liability. If someone uses this and gets killed, he can still be sued and have to defend himself out of his own pocket.

Well, IANAL but wouldn't the license of the software protect him from that:


This is more of a disclaimer which won't stand up in a court of law. If he released the software and understood the risk that someone could use it and get into an accident and get killed, then the responsibility is on him.

Also, since this isn't released by his company, he alone will be personally responsible without any of the laws that would normally shield a business owner from repercussions of something like this.

This is similar to someone releasing malware. The government doesn't just go after the guy who downloaded it and used it to get a bunch of credit card numbers; they go after the person who actually wrote the software:

MegalodonHTTP Author Arrested in December RAT Raid - http://www.securityweek.com/alleged-author-megalodonhttp-mal...

Ransomware Author "Pornopoker" Arrested in Russia - https://www.bleepingcomputer.com/news/security/ransomware-au...

Blackhole Exploit Kit Author Sentenced to Prison - http://www.securityweek.com/blackhole-exploit-kit-author-sen...

To say he's taking a huge risk is an understatement.

Him, maybe. Let's wait until it is tested. But it certainly won't insulate johnny github from liability.

It is an ethically shitty move, though. The software is not properly tested. People will run it thinking it's legit and probably some will get hurt or die. The law doesn't matter.

It's ethically shitty to release open source software now? The people who pull down some source from GitHub that has 4 commits, think it's legit, and rely on it to drive their car are probably not too many. Those that do are generally hard to protect from themselves in any area of life.

7 years ago I completed a PhD developing the first control software for quadrotor drones. I mention this only to give a sense of time. Now every idiot with an arduino and a github account can crash one into my family at the park. Do you think the adoption of shitty self driving technology by unequipped amateurs will follow a different path?

You should have written in code to keep the things from crashing into buildings and people.

So someone is an "idiot" if they hack drone code without a doctorate degree?

The cost of both entry and failure for hacking on a quadcopter vs a car is orders of magnitude apart?

Unless you know something about the upcoming price of cars that you'd like to share ;).


In 5 years how many cars will be equipped with the basal level of sensors that make these sorts of hacks possible? My guess is more people will have compatible cars in their house than quadrotors....

Cost and chance of failure is poorly (or not at all) evaluated by the kinds of dangerous beginners likely to crash their car-pilot into a bridge.

The point wasn't about the ubiquity of the feature set in cars but rather the cost of bricking one/crashing one being well beyond what it is for a quadcopter. And the point still stands.

The people who are out there with quadcopters are the same people who would be doing these hacks at home on cars; some hobbyist/tinkerer subset of nerd.

I don't share your low opinion of their threat evaluation functions either.

Fair enough. Agree to disagree then.

I find it plausible that self driving cars with the same software can perfectly understand each other via predefined behaviour patterns.

If you follow that premise, it becomes less safe to drive manually.

So, under this premise, the modus operandi would be automatic pilot instead of manual, where perhaps manual driving is even illegal because it is less predictable.

The question wouldn't be if this change happens; it's when. Where's the turning point? One could argue it's up to our politicians to decide. I'd be interested in data from other trends, such as smoking. When was the turning point there, and what was the "market penetration" of smokers vs non-smokers, or proponents and opposers?

> People will run it thinking it's legit

That might be believable if they didn't explicitly warn you and you didn't have to flash the phone yourself.

It didn't really strike me as an end-run but more like his interest was in solving the self driving car problem and when faced with solving the regulatory problem he said "no thanks" and walked away.

It is also a reason that, as an engineer (real degree, not self-named), I support software being regulated just like everything else, not only on life critical situations.

Thankfully, at least EULAs are void in most European countries, and companies can be held accountable.

> I support software being regulated just like everything else, not only on life critical situations.

That would be the death of free software. I'm also a P.Eng., but if we're going to require professional licensing, it should only be for safety-critical uses. Shared tools and knowledge have made software systems better and more affordable for countless people. Let's not throw out the baby with the bath water.

How would that work practically? I am sometimes for regulation but I cannot fathom what it would look like.

People seem to be under the impression that regulation inevitably means "someone has to review/audit every single change that was made to any aspect of a product". But in practice there can be all kinds of provisions, from the weak to the strong.

A weak provision might be that at least a minimum of development and testing has to take place, with requirements on record-keeping and naming of responsible persons. For a simple device, or a simple piece of software, this might take the form of a checklist or questionnaire you have to complete before uploading code to the App Store.

Yes, this seems very straightforward to fake, but at least then you have deliberately cheated a system designed to catch errors, and not "just written shitty code" (or built a flimsy mechanical device, when applying this scheme to the non-software world). Which, I think, would already help a lot to make people accountable for their products/inventions/...

(For example: with CE certification, you as a producer only state that you have built your products according to some harmonized standards; the conformance itself is not tested by a third party.)

With increasing security/privacy requirements, the mandated measures could then be ramped up, maybe scaled by the permissions of an application (does it access otherwise protected data?), its ability to process sensitive data, or its number of users.

The extreme measure would be a full line-by-line review of every change when software is life-sustaining or a fault may have catastrophic consequences (deaths, as with running a pacemaker or steering a huge plane).

As a producer, CE certification is very disconcerting. I started out with the mindset of "how can I be certain that I'm compliant?", and the best answer is that it simply doesn't work like that. It's more of a question of whether you have enough paperwork to cover yourself in the (pretty unlikely) event of litigation.

There's some testing you can do, and you can sort of guess which product categories fit you exactly and apply those, but if your product is really innovative this is vague. You end up just putting a declaration of conformity in the box and a sticker on the product and hoping for the best.

I liked the CE process and thought it was pretty clear. You need to identify hazards and classify them according to probability of occurrence and severity, then take appropriate measures to reduce the risk of harm. It's all spelled out pretty well without being a precise code (like NEC or UL 508A or something), so it's flexible enough to apply to whatever domain you're in.
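The probability-times-severity classification described above can be sketched as a small lookup. This is purely illustrative: the scales and acceptability thresholds below are invented for the example, not taken from any harmonized standard.

```python
# Illustrative hazard-risk matrix: severity x probability -> risk class.
# Scales and thresholds are made up for the sketch, not from any standard.

SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_class(severity: str, probability: str) -> str:
    """Classify a hazard by its severity x probability score."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score <= 2:
        return "acceptable"
    if score <= 8:
        return "reduce as far as practicable"
    return "unacceptable without mitigation"

print(risk_class("marginal", "remote"))        # score 4: mid band
print(risk_class("catastrophic", "frequent"))  # score 16: top band
```

The point of even such a crude scheme is that it forces each hazard to be named and scored, so the record of that judgment exists if something goes wrong later.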

We have all our code inspected and pen tested under different regulations and certification rules; if you took that even a bit broader, no software would be written anymore; all programmers would be busy checking code.

And this software is not 'that' critical (financial). I would actually agree that life-or-death software should be checked line by line by experts, with tools and well-described processes. I would even insist that critical software that can be formally verified should be formally verified. But for a broader range of software this is not possible, and I don't think it should be needed for innocent software like, say, games.

What could work for non-critical software is having to pay for bugs when software is faulty, based on the severity of the bug. I would be very much against the kind of liability that involves indirect damages, so the maximum damage would be limited to what you paid. Most people do not want that, but you would want some or all of your money back in case of neglect (bugs that do not properly get fixed) without forgoing your license. That would kill some companies (if we turned back the clock with this rule, MS and Oracle notably), but then again, they wouldn't cut corners like they do now.

This is how medical devices are regulated. The FDA doesn't micromanage or guide your development. They just say "you must have a quality management system spelled out and stick to it. However it's written, you must obey it."

There's a little more to it. There's some CFR standards that you must adhere to (it's been 12 years, so I don't recall the CFR part number off the top of my head).

ANAE, but one form it could take is via a government or independent expert body with a mandate to inspect and certify code in some meaningful way before it reaches the consumer.

All code? I'm pretty sure you'd just grind the industry to a halt if even Flappy Bird must be inspected and certified.

If it's just code used in critical situations, then you're not really talking about the same thing as pjmlp.

Self-driving car code is a critical situation, surely? Isn't that what we were talking about?

Ancestor in this thread said:

> I support software being regulated just like everything else, not only on life critical situations.

Maybe there is a line that is "critical", but not "life critical", but ancestor's statement was pretty damn broad.

The code is likely useful in many other ways than just self-driving cars.

This may very well be the future text-book example of how to not handle life critical systems and software, unfortunately it may well take people being killed. At this point we have no idea how this software handles pedestrians, cyclists, and even motorcyclists. We have no idea how it handles unanticipated situations like black ice or even construction zones. It has not been tested or certified by anyone. Clearly anybody who thinks they can download this, compile it and run it in their car likes playing with fire.

Sadly, despite 50 years of work on it (and there is still a lot of work to do), we have no idea what to regulate. Which is the culprit here.

Yeah, I doubt that this code dump will stimulate a community to form in a legally safe way, similar to the Linux kernel or GnuPG; the choices of BSD and CC-ShareAlike-NonCommercial are likely to encourage manufacturers to pull a Sony: take the code, build their own training data, and contribute nothing back.

Wouldn't it be better if he pivoted to self-parking?

At $700, a kit that allows a car to drive itself to the garage, open it, and park would be very attractive.

Driving through a parking lot seems to be way easier than a fully autonomous car.

You mean finding an empty spot and parking? Interesting small-scope project.

Yes. Or even let me get out of the car at the front door and let it drive to the garage spot.


+ Parking is one of the least favorite parts of driving. It also eats a lot of time.

+ Low speed makes it much easier; you can stop at any time. Also, stopping and asking a human for help would be acceptable.

+ 360° sensors are superior to human vision in parking situations.

+ Parking accidents are usually minor. A self-driving car getting a minor bump or scratch is much easier to swallow than someone getting killed or injured.

+ You can implement valet parking for free. E.g. a self-driving valet that gets a 30% discount would be very welcome in San Francisco.

Also, if you're late to work and can't find a free parking spot, you can let the car circulate until it finds one while you head to work.

But this would add other problems to cope with.

Parking can be fun sometimes, but many times it's indeed stressful; you have to take care not to touch other cars, and on streets, avoid blocking traffic too long.

I don't think he was targeting full autonomy with the initial release.

I might be missing something, but most of the critical code seems to be binary. It only supports two car models. So unless you own those two and are willing to let no-warranty indie code which you cannot see drive you around, this is a solid no-no.

It would be interesting to have somebody knowledgeable in the area of self-driving cars analyze his code. It seems like he might benefit from a combination of learning a few things and some level of humility. Kalman filters, and even the algorithms used to lower the noise in images and radar, were not invented overnight; when it comes to a car that may easily kill somebody, it seems worth being a bit cautious and learning about the potential pitfalls of different algorithms and technologies. By all means build self-driving car algorithms and cars that drive on test tracks, but if one of his cars ran over my kid, or anybody else's kid, then the price of progress is too high.
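For context on the Kalman filters mentioned above: the scalar textbook form is tiny. A minimal 1-D sketch that smooths noisy measurements of a roughly constant value (a real automotive filter tracks a full vehicle state with matrices; this only shows the idea):

```python
def kalman_1d(measurements, meas_var=1.0, process_var=0.01):
    """Scalar Kalman filter: estimate a slowly drifting value from
    noisy measurements. Returns the filtered estimate after each step."""
    x, p = measurements[0], 1.0     # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += process_var            # predict: uncertainty grows over time
        k = p / (p + meas_var)      # Kalman gain: trust in the measurement
        x += k * (z - x)            # update: move toward the measurement
        p *= (1 - k)                # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy = [5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7]
print(kalman_1d(noisy)[-1])  # settles near 5.0, the underlying value
```

The nontrivial part in practice is tuning the noise variances and extending this to multi-dimensional state, which is exactly the kind of accumulated know-how the comment is pointing at.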

Serious question. He seems to lack tact, saying Ford never did anything remotely useful in computer science. Why would his investors let him piss off their potential acquirers that way?

He basks in controversy. It greatly detracts from what he is hoping to achieve: automating humans traveling in large metal contraptions at 70 mph. This is not an area for inflated egos.

"Move fast, break things"; eventual body count notwithstanding, this is literally Silicon Valley's ethos for the last 15-ish years, expressed via temper tantrum.

I hope he goes into space, I even have a name for his next company: astron'hotz.

While possibly too political to discuss on HN, Hotz is a Brand. His claim to fame was rooting an iPhone and later showing the world why we can't have nice things (demonstrating that OtherOS was a security hole, which led Sony to disable it).

This was a continuation of that. He was hacking the future and all that crap to raise money. Once he and his backers realized there were actually regulatory boards for this (Oh no! Politics!) they had to pull out immediately. So they "fought the system" and open sourced it (not really) to show that he is A Man of The People.

And most people will never care to look at the source and will (hopefully) think he invented Teslas or whatever the hell.

And in a year or two, he and many of the same backers will innovate and revolutionize a new field with a new product that they can hopefully get bought out on before that field's regulatory board cares. And the buyer will know it is snake oil but will now have a reputation and a Brand to increase the value of the actual product they had been working on for years.

Eh, the "nice things" logic is so irrelevant here; I hope nobody expected this to be true with vehicles. Apple's locked ecosystem has nothing to do with... well, gravity.

People either don't know or forgot that, not long ago, DARPA still held an open challenge to drive alone in the "desert".

The structure of my post could probably use some work. The first paragraph is background on Hotz to give context. The second was what he did at comma and why it was pseudo-open sourced. The third was why it will be effective, and the fourth is how he and his backers will profit

Just in case: I didn't mean to attack you, but to react to a potential societal interpretation of the Hotz persona and subtext.

Neither Marc Andreessen nor PG seem to be particularly tactful.

Sorry for them, but in that department they're way below GH.

The big car companies are acting like a forty-year-old who had a year's piano lessons when they were a kid but suddenly decides they really want to become a concert pianist, now, and earn big bucks from that. They shouldn't even want in the business. The false lure of getting in on high-tech IP and abusive lock-in has 'em dazzled. Releasing superior open code puts far more air up the automotive companies' skirts than stating the obvious reason why the auto cos' private code can't beat his open code into the ground. The sooner they realize they're chasing a dream, the better for his bottom line. A little shock treatment is in order, I think.

A somewhat critical point of view from a month ago on George Hotz, comma.ai, and engineering ethics for this category of software:


He says it's better than anything other than the Tesla, yet he ran away the moment NHTSA looked into his project. I've been around long enough to know when something smells like bs, looks like bs, and sounds like bs...it's bs.

This whole thing is ridiculous. It only works because it piggybacks on the hard work of engineers at major manufacturers who put front-facing radar into their vehicles, plus a little bit of image recognition based lane keeping running on a smartphone. It's not "self-driving". It's time to put this sort of thing into perspective: it's Yet Another Half-Baked Level 2 Automation System (YAHBL2AS ?).[1] The world doesn't need another one of those. There's nothing new here. For reference, there's a comment elsewhere in this very HN thread from someone who said they did image recognition based lane keeping as a university project, and they said they did it 10 years ago using Matlab.

If there is going to be a new AI "winter" (and I'm starting to think that's more and more likely every day), it will probably be the result of too much hype around overblown ego-driven claims such as this. Hotz is probably doing more harm than good at this point.

[1] http://www.sae.org/misc/pdfs/automated_driving.pdf

I agree with you in some respects, but the project description does not mention the "self-driving" part, and I quote: 'Currently it performs the functions of Adaptive Cruise Control (ACC) and Lane Keeping Assist System (LKAS)'. This I can believe it performs, to some degree.

Do you think the media can be overinflating the whole project?

This isn't the entire codebase, it's what they've stripped out and open sourced (and a critical chunk isn't even OSS, it's a binary blob in the github repo).

Furthermore, ACC and LKAS are standard features provided by Honda/Acura with the required feature package needed to run this code! Neo will provide more functionality, but as of now, NeoOS hasn't been open sourced at all and is just a binary Android image.

It's not just the media though, he has a history of drama and overplaying things and is repeating the pattern here. I don't want to downplay his achievements, he's done a lot, but the hype is far greater than even that high bar of accomplishments.

Hotz described his product as 'fancy cruise control'. He actually doesn't play it up, and there are any number of fly-by-night autonomous driving startups working on rudimentary autonomous driving systems on next to no budget, but of course it's Hotz getting all the attention.

I'm seeing a lot of internet commenters reacting emotionally to Comma.ai, but the only thing they're getting upset about is their own inflated expectations. The actual product is too niche, too esoteric to warrant a strong reaction. It's a toy for nerds to tinker with, nothing more.

He claims to be on par with Tesla:

> It’s about on par with Tesla’s Autopilot 7, the one that came out when it launched.

That's not playing it up to you?

He's using a car that already has ACC and LKAS functionality to provide exactly that, yet scorns Tesla for using Mobileye.

I'm not sure which part of your claim is in opposition to my own, but you seem to be disagreeing with something I said.

> He actually doesn't play it up

That part... ;)

Now I get the whole picture, thanks for the clarification.

The reality is that Adaptive Cruise Control and lane assist are good enough for 90% of driving.

Let computers handle the boring parts of driving, and send the hard stuff over to the human when it needs to.

That is a lot of gain that we could be realizing right NOW on the streets.
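At its core, the ACC half of this is a feedback loop on speed and following distance. A toy sketch with invented gains and limits, nothing like a production controller:

```python
def acc_step(gap_m, ego_speed, lead_speed, set_speed,
             time_gap_s=1.8, kp_gap=0.3, kp_speed=0.5):
    """One ACC update: return a commanded acceleration in m/s^2.
    Tracks the driver's set speed, but yields to gap-keeping when a
    slower lead vehicle is close. All gains/limits are illustrative."""
    desired_gap = time_gap_s * ego_speed          # time-gap policy
    gap_error = gap_m - desired_gap
    # Candidate accelerations from the two competing goals
    a_speed = kp_speed * (set_speed - ego_speed)
    a_gap = kp_gap * gap_error + kp_speed * (lead_speed - ego_speed)
    # Be conservative: take the smaller (most braking) command
    a = min(a_speed, a_gap)
    return max(-3.0, min(a, 2.0))  # clamp to comfortable accel limits

# Closing fast on a slower car: the gap term wins and commands braking
print(acc_step(gap_m=20, ego_speed=30, lead_speed=25, set_speed=33))
```

Taking the minimum of the speed-tracking and gap-keeping commands is a common way to arbitrate the two goals: the controller never accelerates into a closing gap just because the set speed says so.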

That's entirely true, but it's also problematic because although you have "computers handling the boring part" you still need constant human supervision.

If the human driver mentally switches off or stops paying attention, things get dangerous as has been demonstrated in numerous Tesla autopilot crashes.

That isn't to say that these features don't improve safety. Widespread introduction of radar-based automatic emergency braking (AEB) will significantly reduce crashes and save many lives. But until we're closer to 100% autonomy, features that reduce driver workload need to be implemented carefully.
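The radar-based AEB decision mentioned above typically reduces to a time-to-collision threshold on range and closing speed. A minimal sketch with assumed, non-production thresholds:

```python
def aeb_decision(range_m, closing_speed_ms, brake_ttc_s=1.5, warn_ttc_s=2.5):
    """Classify the threat from radar range and closing speed using
    time-to-collision (TTC). Thresholds are illustrative only; real
    systems also gate on ego speed, target confidence, and braking-
    distance models."""
    if closing_speed_ms <= 0:         # gap is opening: no threat
        return "none"
    ttc = range_m / closing_speed_ms  # seconds until impact at current rates
    if ttc < brake_ttc_s:
        return "brake"
    if ttc < warn_ttc_s:
        return "warn"
    return "none"

print(aeb_decision(range_m=30, closing_speed_ms=25))  # TTC = 1.2 s
```

The warn-before-brake staging is the part that matters for the workload argument: the system can hand attention back to the driver a second or two before it has to act on its own.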

What exactly will humans be doing while the computer handles the boring parts of driving? If the answer is 'supervising', that sounds even more boring than actually driving.

Right - they will not be supervising or paying attention or ready to take over in an emergency. Software that is designed with a dependency on drivers doing any of the above is an accident waiting to happen. The usefulness of systems that do not implicitly rely on human attentiveness cannot be assumed for those that do.

They should be supervising, but I imagine they will be staring at their phones. That said, there are 4-5 hour hauls down I-65 that I wouldn't mind downgrading to "supervising" :)

I'd be happy if this type of system would scan really aggressively to warn/avoid the deer barely visible off the road before it jumps out in front of me...

What car that comes fitted with the required radars/cameras doesn't have those features already though?

I agree with your point. By the way, I would like to point out the "elephant in the room": what if, after so much hype, heuristic/ad-hoc AI becomes able to beat expensive "deep learning" AI in most cases? In my opinion, that could bring the next AI "winter". E.g. imagine a $5-10 RISC-V (or ARM/PIC32) 300 MHz microcontroller with 4 MB of embedded RAM/ROM beating $400-2000 ASIC/FPGA hardware with teraflops and gigabytes of RAM on most AI problems, with "good enough" results.

edit: grammar corrections

I'm not 100% clear what you mean by heuristic/ad-hoc. The first AI winter happened because researchers (e.g. Lenat) thought they could build true AI (consciousness) out of millions of rules.

If rules, guidelines, and heuristics can beat neural nets at driving (which they might, I agree with you on that) it won't start a new AI winter because it won't erase the other things that neural nets are good at (e.g. classification).

Sure, "real" AI is important. I meant that using massive resources to solve simple problems with expensive AI will lead to disappointment as soon as someone shows equivalent results using cheaper AI (e.g. minimax, pathfinding, linear optimization, expert systems, etc.). In my opinion, it is like hunting ducks with cannons when a simple gun would be enough.

In my opinion, real advances apart, a large portion of the "new AI business" is just a way of getting money from investors by telling them the magic words: "deep learning", "big data", "jump on the wagon before it gets too late", etc.

I think I know what you're trying to say, but it's really difficult to follow your writing. Just some friendly feedback.

> "it piggybacks on the hard work of engineers at major manufacturers"

What hard work? It's a COTS part.

Not to mention, front-facing radar, even combined with image recognition, is woefully inadequate for true self-driving (as opposed to, for example, Tesla's so-called "Auto Pilot").

What do you need for "true self-driving" in your mind?

Cameras + radar is more than humans have at least, so I think that should be enough.

Solid state LIDARs will soon be cheap enough and a good complement of course but I find it hard to believe autonomous driving would be impossible without it.

You don't need LIDAR for self driving cars. You also don't need megawatts of power to play top-level Go: humans do it with a few tens of watts of energy. Yet Google needed ~megawatts of energy to run AlphaGo on their massive server farm.

Imagine two companies competing to win at Go, and one company had the attitude that megawatts of energy was not necessary, and another company threw whatever resources they could. The second company just played top-level Go this year. The first company is at least another 5-10 years away from it.

I'd also like to point out that humans do SLAM already (LIDAR solves SLAM). From a previous comment:

Humans implicitly perform SLAM (simultaneous localization and mapping). What do I mean? Look around your room. Close your eyes. Visualize the room. As a human, you've built a rough 3D model of the room. And if you keep your eyes open and walk through the room, that map is pretty fine-grained/detailed too, and humans can keep track of where they are in the map.

LIDAR solves SLAM? LIDAR is a sensor you can use as part of a SLAM system.

Yes. The problem of SLAM from LIDAR, GPS / IMU data (which are cheap sensors), and pre-recorded maps was solved years ago.

In my opinion, a system that combines LIDAR, GPS, and a pre-recorded map is not really doing SLAM. This scenario does not suffer from the "chicken and egg" problem which is a defining characteristic of SLAM. GPS and the pre-recorded map both make the problem much, much easier than SLAM in a GPS-denied, unmapped area.

Maybe we are just arguing over terminology.

We are. SLAM, LAM, same thing as far as building a 3d model of your environment is concerned.
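Mapping with known poses (the "LAM" case under discussion) really is the easy half. A 2-D occupancy-grid sketch, assuming perfect poses and idealized range beams:

```python
import numpy as np

def update_grid(grid, pose, hit):
    """Occupancy update for one range beam with a *known* sensor pose:
    cells along the ray accumulate free-space evidence (-1), and the
    terminal cell accumulates occupied evidence (+2). With poses given,
    mapping is just bookkeeping - none of SLAM's chicken-and-egg problem
    of needing the map to localize and the pose to map."""
    (px, py), (hx, hy) = pose, hit
    steps = max(abs(hx - px), abs(hy - py))
    for t in range(steps):                      # traverse the ray
        x = px + round((hx - px) * t / steps)
        y = py + round((hy - py) * t / steps)
        grid[y, x] -= 1                         # observed as empty
    grid[hy, hx] += 2                           # beam endpoint: occupied
    return grid

grid = np.zeros((10, 10), dtype=int)
update_grid(grid, pose=(0, 5), hit=(6, 5))      # beam straight along y=5
print(grid[5, 6], grid[5, 3])                   # prints "2 -1"
```

Drop the known poses and you must estimate them from the very map you are still building; that coupling is what makes full SLAM hard, and what GPS plus a pre-recorded map removes.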

You can do SLAM with only cameras ...

No you can't. Have you tried it?

Firstly, the state of the art (which is not deep learning based) still has trouble when the camera doesn't turn/swivel a lot (e.g. it has trouble when the camera is just moving straight).

And that's not even getting to the most critical assumption. All of visual SLAM right now assumes the environment is static (!!!), and doesn't work when things in the environment are allowed to move.

> Cameras + radar is more than humans have at least

Excepting, perhaps, that kilogram lump of gray matter floating around in our skulls.

It appears, from the article, that they're working on a shoestring budget. My initial impression was that adding at least one full-time specialist, maybe more, simply to dot the i's, cross the t's for NHTSA, plus purchasing (or renting) infrastructure to do all the required tests could probably easily bankrupt the project in its current state.

My guess is sometimes it's probably ok to decide a project is not yet in a position (financially or structurally) to take on the additional responsibility of dealing with an enormous bureaucracy such as the US govt. And if that means you have to give your product away in order to do so, that's a calculated risk you might feel you have to take.

I seriously doubt that when NHTSA contacted them they were like, "just invite us into your building and show us around", like a health code inspector might.

...and in addition, the attitude evident in the closed issues is more like -- "Here community clean this mess up for us, just don't ask us to give you anything functional, we have a company to run, you see -- at least this bit is free" ...meh, pass.

I could imagine NHTSA was pressing him too early for his taste. I don't know Hotz personally, so I can just guess his motives, but if it was me I might react in the same way. After all, this is basically still a hobby project (although announced like a product and made by a very clever guy). Who knows if he even has the money and time to implement NHTSA's demands?

The NHTSA seemed only to request that comma.ai push back its release date until further testing happened.

As for the money:

> Well, we have Andreessen Horowitz [the VC firm], which provided $3.1 million, and we spent under $1 million to do all of this.

Whoa, I didn't know about that money. I wonder if the investors aren't angry, VC money doesn't come without obligations. I could imagine the technology ends up in some kind of portfolio and gets reused by somebody else later.

OTOH, this still seems a bit like a hobby project to me. If I can believe the other comments, the code is not too spectacular, and the biggest asset is likely Hotz' name.

I am interested: where's the profit behind this funding if it's given away?

Or is the funding just to buy the person/people/team to make some innovative products, or to work in some other self-driving car research teams?

The experience gained while writing that code could be a lot more valuable than the code itself. If a car company were ever interested in building on top of what comma.ai has done so far, they would hire (or acqui-hire, that's the VC bet) the heads and have them rewrite the code together with more formal engineers.

But if nobody buys comma.ai, none of that will happen. The (little, given the domain) money invested will only have been sponsorship of a romanticized take on hacker culture. Which may actually have been a contributing factor in the investment decision here: this is a frugal garage operation, not a fast-burning startup with fancy offices.

Open sourcing some code can create visibility, without it we would not be talking about this today. From the investment perspective, opening this code is clearly a hail-mary-pass. There are situations when those are the best course of action and I think this is one of them.

It doesn't seem like a hobby project given he accepted VC funding for it.

They had intended to start shipping units by the end of this year. If the NHTSA was pressing him too early, I can't imagine what would have been the appropriate time.

The NHTSA should not be guided by Hotz' taste in this matter.

Do you see anything wrong with a hobby project steering cars around at high speed? Hotz is clearly a very clever guy, but his judgement is the issue here.

Yeah... a hobby project with $3.1M in funding. To be honest, though, I've looked into the GitHub repo, and from what I've seen I don't really think it's enough to justify that amount of money.

> yet he ran away the moment NHTSA looked into his project

To be fair, this could have nothing to do with the quality of the project. I don't want anything to do with the NHTSA (or any other government regulatory body) if I can help it.

>I don't want anything to do with the NHTSA (or any other government regulatory body) if I can help it.

Great, so don't make software for self-driving cars.

Well, I would just do what this guy is doing and make the software available for free. This is preferable to never making the software at all. If the NHTSA manages to put a chilling effect on open AI research, that's a great example of government regulation indirectly hurting a lot more than it helps.

> To be fair, this could have nothing to do with the quality of the project

To be fair, looking at the code, it has all to do with the quality of the project

Do you happen to know where the code is? Can you post a link?

Doesn't look too bad, except:

- I can't find automated tests
- The vision module is a binary blob (as was also noted in the previous HN discussion)

The rest is basically just a bunch of glue and bootloader code.

Ninjaed by LukeB

Here's a discussion about the github repo


So you're saying he may have been convinced he could make a self-driving car in the US and avoid dealing with the regulatory bodies?

Sounds like your opinion of him is even worse than the above.

By bs, do you mean poor practices like the ones in the throttle control system of the Toyota Camry (which NHTSA/NASA failed to detect)?

http://www.embedded.com/electronics-news/4423365/Toyota-Camr... http://www.edn.com/design/automotive/4423428/Toyota-s-killer...

FYI: Previous discussion on the Toyota issue: https://news.ycombinator.com/item?id=6636811

If accurate this would be evidence that the NHTSA isn't very stringent, which would actually make Hotz's self driving project seem even worse.

The poor practices at Toyota are true (as in "established as fact in a court"). The NHTSA are indeed not very stringent as they don't really pro-actively audit systems, they rely on compliance documentation.

I believe this is how every regulatory agency works around the world: they don't do any tests on your products; you only submit documentation related to your development and tests.

They are basically looking for proof that you did the required things: specifications, tests, risk management... They are looking into your development process to estimate whether your product is safe or not. But this is only indirect.

Until you actually kill someone, and then, the same agency will send you its investigators and auditors.

I thought FCC approval required lab testing by an approved lab, while for CE self-certification is acceptable?

UL actually does testing, although it's not exactly a regulatory agency.

A regulatory agency can ask for certifications, but it is possible that the certification authority is an independent entity (which probably needs some sort of approval or audit from the regulatory agency).

I remember when we did lane image recognition in Matlab on uni, I felt like a master hacker. This was like 10 years ago.

Sure, this is more advanced tech, but seriously, we should be far, far further along in the development of these systems.

I still have my tinfoil-hat hope that all these technologies are ready-made and resting in a drawer, and that companies are just milking the fossil fuel era as long as they can, teasing us with this crappy progression meanwhile.

There is no connection between electric engines and self-driving.

Except it's easier to control an all-electric vehicle with constant torque, no transmission, etc.

How would self-driving detract from fossil fuels? It should increase total demand for fuel.

Self-driving tech would make vehicle sharing much easier, which would in turn allow people to use vehicles properly sized for their current need.

In other words, people wouldn't need a big honkin' SUV or truck most of the time, and could ride around in something smaller with just the fuel range they need for that trip.

Also, traffic might get better. If everything became automated it might even be possible to eliminate traffic lights.

There would likely be vehicle-sharing benefits, but self-driving cars are apt to make traffic worse. Your average 8-lane freeway will still only be able to carry 11k cars an hour, assuming everyone keeps minimal stopping distance and drives at 40 mph for ideal throughput.
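For what it's worth, the 11k figure is roughly reproducible with a back-of-envelope calculation. Here's a sketch in Python where every parameter (reaction time, braking deceleration, car length) is an assumption I picked for illustration, not a measured value:

```python
MPH_TO_MS = 0.44704  # miles/hour -> metres/second

def lane_throughput(speed_mph, react_s=0.5, decel=5.0, car_len=5.0):
    """Vehicles per hour per lane if each car keeps its full stopping
    distance (reaction gap + braking distance) plus one car length of
    headway. All parameters are illustrative assumptions."""
    v = speed_mph * MPH_TO_MS
    headway = v * react_s + v * v / (2 * decel) + car_len  # metres per car
    return 3600 * v / headway

total = 8 * lane_throughput(40)  # 8-lane freeway at 40 mph
print(round(total))              # ~11,200 cars/hour, in the ballpark of 11k
```

Tighter platooning (shorter reaction gaps) raises the number, but not by an order of magnitude while stopping distance still has to be respected.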

Traffic lights are also unlikely to disappear, as self driving cars do not magically get perfect traction or stopping abilities, and they will have to share the road with the other humans that exist on our streets, whether they be pedestrians, bikers, joggers, skateboarders, etc.

In contrast, a light rail alignment can move 20k people an hour. It also won't reduce traffic, but it'll slow traffic growth by reducing latent transport demand and allow cities to remain economically viable as traffic increases.

Even with a light, the self-driving vehicles could use precise knowledge of the signal's timing to coast towards the intersection at just the right speed so that they don't have to fully stop and waste momentum.
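That scheduling idea is simple to sketch. The function below is hypothetical (the name, the floor speed, and the bounds are invented here), assuming the car knows its distance to the stop line and the seconds remaining until green:

```python
def approach_speed(dist_m, secs_to_green, v_limit_ms, v_min_ms=2.0):
    """Choose a coasting speed so the car arrives at the stop line just
    as the light turns green, never exceeding the speed limit and never
    crawling below a floor speed. Hypothetical sketch."""
    if secs_to_green <= 0:
        return v_limit_ms              # already green: drive at the limit
    ideal = dist_m / secs_to_green     # speed that times the arrival exactly
    return max(v_min_ms, min(ideal, v_limit_ms))

# 200 m from the line, green in 15 s, 50 km/h (~13.9 m/s) limit:
print(approach_speed(200, 15, 13.9))   # ~13.3 m/s, so no full stop needed
```

If the ideal speed exceeds the limit the car simply arrives after the light has changed, which is still fine; the clamp just keeps the plan legal.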

The notion of traffic will probably shift if energy becomes cheap and navigation doesn't require human attention. You don't need the shortest path, and flow can be redirected freely onto smaller roads. Unless you're in a hurry.

Unlikely, at peak times most alternate routes are already at or above capacity, energy & navigation dropping in cost will not make more road capacity suddenly appear, and the point of diminishing returns with road capacity is very low. When you try to ignore that you end up like Atlanta, which is no joy.

The OP has it wrong. It wouldn't decrease fossil fuel use. But what it WOULD do is massively decrease the number of cars the world needs.

The average car spends 98% of the time unused, sitting in a garage or a parking lot. That is a lot of money that the car companies will lose when this technology hits the mainstream.

Yes. And it would also massively decrease the number of parking spots that the world needs.

That is a lot of money that cities will lose when this technology hits the mainstream :)

There are countless auto manufacturers around the world. I was surprised to see the length of the list:


I'm sure some percentage of these are willing to move faster than the major brands we know in the US. This strategy could help them reach these smaller companies, some of whom surely operate in more flexible regulatory environments. Especially the local companies.

FYI, that list also contains manufacturers that went out of business a long time ago. The first one I looked at immediately said "The company [Minerva] became defunct in 1956". For some countries this is explicitly marked with a "defunct" header, but not everywhere.

In which countries is it legal to install a DIY actuator system like this one?

He is not releasing visiond, the main part of the project though. Can't say with certainty, but I don't doubt it was a bit of a publicity stunt.

> not releasing visiond binary

Nit: he did release the binary[0], he just didn't release its source, and there's a README[1] that mentions this:

> Contact us if you'd like features added or support for your platform.

But I totally agree that it is pretty fishy not to provide the source, especially in an "open-source" project...

[0] https://github.com/commaai/openpilot/blob/master/selfdrive/v...

[1] https://github.com/commaai/openpilot/tree/master/selfdrive/v...

Yeah, the source, my bad. The blob is available.

Am I missing something? I only see the dataset available but no source code.

As typical for this type of article, it's poorly linked.

Here's the code: https://github.com/commaai/openpilot

Wow, this codebase is alarmingly bad. No tests. Comments like "TODO: actually make this work" all over the place. I can see why geohot is terrified of regulators.

Apropos, https://github.com/commaai/openpilot/issues/5

bneiluj > There is no test suite. Unit tests or E2E tests.

Geohotz > Pull request please!

mgraczyk > We have unit tests which have not yet been open sourced, because we haven't yet set up automatic testing on Github. [...]


In what world is it acceptable to elide the test suite for your product because you don't have automatic testing set up on your repository host? Are your tests in a completely separate repo... (checks the repo history) no, you've just dumped a snapshot of your development in a git repo and pushed it to github.

I really don't want to sound snide... this project has the capability to be so cool. It's just... well, if someone hands you what they say is a magic box that you can trust your life to and that box is manky and leaks strange fluids all over the shop (and it really isn't supposed to do that)... what would you think? Is that box magic? Will that box save your life?

It certainly doesn't give the engineer in me great confidence in the quality of their system if this is the quality of the piece they're proud enough to show everyone.
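And the kind of test people are asking for isn't exotic. A minimal example of the shape such tests usually take; the function and its limits are invented here for illustration, not taken from openpilot:

```python
def clamp_steering(angle_deg, limit_deg=45.0):
    """Hypothetical helper: keep a steering command inside actuator limits."""
    return max(-limit_deg, min(angle_deg, limit_deg))

# pytest-style unit tests: plain functions with bare asserts
def test_passthrough():
    assert clamp_steering(10.0) == 10.0

def test_clamps_both_directions():
    assert clamp_steering(90.0) == 45.0
    assert clamp_steering(-90.0) == -45.0
```

A file like this lives next to the code and runs in seconds; no CI setup on the repository host is required to ship it.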

Seems the most important part is not open-sourced:


Which makes sense. He talks about giving away the software while providing network services for profit.

Which means his models (aka core) will stay proprietary - and will be delivered by the network.

I guess it's MaaS, whereas Tesla gives you free hardware and paid SaaS.

Looking at that github repo, it's a handful of Python scripts with around 30-odd commits? Am I missing something, or is this really the core implementation of something as complex as self-driving car software?

Edit: Someone else in this thread commented that the repo linked in the article isn't the right one and apparently this is where the code resides https://github.com/commaai/openpilot

Previous discussions came to the same conclusion, with the added realization that Python has no hope of maintaining the consistent latency required by a hard-real-time task such as driving a car.

I was able to run walking robot control software in Python at 100 Hz hardly ever missing a tick. Because it uses reference counting for most garbage collection, Python performance is very consistent. Still, I wouldn't let it drive my car.
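If you want to check that kind of claim on your own machine, a toy fixed-rate loop like the one below measures how far past each 10 ms deadline a plain Python process drifts. It's a sketch, not a proof of real-time behaviour; results depend entirely on your OS, scheduler, and load:

```python
import time

def worst_overrun_ms(hz=100, seconds=2.0):
    """Run an empty control loop at a fixed rate and return the worst
    deadline miss in milliseconds. Toy benchmark only."""
    period = 1.0 / hz
    deadline = time.monotonic()
    worst = 0.0
    for _ in range(int(hz * seconds)):
        deadline += period
        # real control work would go here
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)          # wait out the rest of the tick
        else:
            worst = max(worst, -remaining)  # we missed this deadline
    return worst * 1000.0

print(worst_overrun_ms())  # worst miss in ms; depends on system load
```

Note the deadlines are absolute (`deadline += period`), so one slow tick doesn't accumulate drift into the next; that's the usual way to structure a fixed-rate loop.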

I don't know how much bandwidth and complexity is involved in processing the single-cam setup; maybe with a recent Core i7 the variability of Python is swallowed by the CPU performance.

That said, Hotz claimed to have read state-of-the-art papers, but it doesn't seem he meant hard real-time physics/mechanics computations; probably neural nets, computer vision and deep learning. Hacker way?

> I don't know how much bandwidth and complexity is involved

It's not about bandwidth or complexity, it's about latency. And no, even a recent core i7 can be stalled for long enough to cause underruns in a simple audio stream.

Real-time is about maximum-bounded response time, not performance.

An i7 in this case will be significantly worse than a Cortex-M0, entirely due to the features in the i7 that make it high performance, such as instruction reordering, branch prediction, etc. All those features make the timing less deterministic.

Python isn't the only code, there's also a bunch in C too, but that's mostly only provided in binary form.

> Someone else in this thread commented that the repo linked in the article isn't the right one and apparently this is where the code resides https://github.com/commaai/openpilot

Hmm. Sorry, but no.

I am not going to let my car be driven by dynamically typed code without any tests.

Frankly, it looks like a hobby project that has grown a bit messy over time.

>I am not going to let my car being driven by dynamically typed code without any tests.

How are you going to enforce that? Make Tesla show you their source?

How are you not already enforcing that? Make the existing automobile makers show their source for the plethora of computer driven systems!

Even if they don't show unit tests they will at least have done a ton of manual testing and a bunch of tests for regulators. This has none of that.

>they will at least have done a ton of manual testing

That is a very quantifiable amount that I totally understand.

>bunch of tests for regulators

Oh, and how many of our emissions tests have been gamed by at least one company?

In this case no one has verified it, and it is obviously kind of a shoddy product (e.g. it has no unit tests), so I won't be satisfied until there has been some testing on it. I don't have a specific amount of testing in mind, and any number I put on it would be completely arbitrary anyway.

Overall I trust regulators; the problem in the emissions case was a company specifically dodging regulations in a way that wasn't apparent to the consumer. This is a different prospect: any issues here would be immediately apparent to end consumers, so if you tried to game the system, people would figure it out pretty quickly.

comma.ai has other, older repos; maybe openpilot is just a release repo, which would explain the low commit count.

Also, they rely on Python neural network libs, which lowers the line count, probably tremendously; with the right abstractions already in place you can go pretty far without a lot of lines (think yacc).

I just cloned it, it's about 40,000 lines of C code. More than a handful.

I guess that the hard real-time stuff is handled by a microcontroller board.

If not enough for that, at least enough to get $3.1 million in funding

What surprises me about the repo is that there is no commit history from before it was made public. Did they manually copy the code over (why would they?), or did they not use a VCS at all before (I can't imagine that)?

It's common to sanitize a commit history before making code public. Old commits can contain potentially sensitive information, like certs committed by mistake and later removed, db dumps used for testing that contain unsanitized data, commit messages could be some value of unprofessional that you don't want to share with the world, that kind of thing.

I find its absence much more unprofessional.

Commits are an essential ingredient in a modern codebase to describe why each line of the code is there.

I don't want to go back to blocks being commented out and code intertwined with comments like /* 6-dec-2016 tomtomtom: moved from other_function() to fix latency */

Can you point to one such project that wasn't planned from the beginning to be open sourced, where the entire commit history was later made public?

IIRC, the Java source code repositories included pre-public commit history (you had to accept a license to access them).

Torn. Concerned about safety, but wanting this to be open source and not vendor locked in...

It only works on 2 cars that have a tech package. Might as well buy it from the vendor.

Vendors who will use his tech now, 'cause it's free and better than what they could craft themselves. His point is to offer his services to vendors. What's available now amounts to a demo.
