Hacker News

> Thanks, this discussion is going well. I want to add that it wasn't NHTSA's data, it was Tesla's data, which NHTSA cited without any fact checking. But since Tesla has been parroting NHTSA's citation, NHTSA has since withdrawn its support for Tesla's data.

Yeah, I had read that previously. I understand NHTSA not wanting to be seen as a primary source of data that it did not collect, but I don't think Tesla handing over that data changes anything material here. I don't see how the source of the raw data is relevant to the report, barring a case of fraud or massive incompetence among those interpreting it.

> So, let me reiterate -- we agree that Autosteer is lane keeping assist given more responsibility. Whether that's helpful to the driver, or dangerous because it distracts him, is a debate we should have with better data, which we don't have. Unfortunately, I personally don't trust Tesla to provide data without hidden, unsaid caveats, given their behavior to date.

I hear you. I would distinguish, though, between Tesla citing data and crafting a story around it (e.g. what they did with the 1:340M-miles figure), vs. Tesla providing raw data upon request to a regulatory body like NHTSA or an advisory body like the NTSB, and then having those bodies draw their own conclusions. So long as Tesla isn't committing fraud and the investigative methods are statistically sound, the source of the data shouldn't matter. I like this model because, by enabling Tesla to be the data source in a way that is trustworthy, we can get access to much more data (and much more quickly) than if we had to know all possible queries in advance and force everything through a third party. For all the recent friction between Tesla and NHTSA over the latest accident, both parties have a long track record of saying they work well together.

> However, there is another topic we should talk about. Do you think improving Autosteer lane detection with vision will make a car capable of driverless operation? Because that's precisely what Tesla markets their technology as, with the coast-to-coast drive, cross-country summon, or driverless taxis. I personally don't believe a line-following robot is enough for self driving. In fact, I like Google's approach more, where they drive only when the road is free of any object (3D-mapped by lidar).

I agree that Tesla's neural network needs to do much more than line following, but I don't couple that with a LIDAR approach.

One reason I am so bullish on Tesla is that I think the conventional wisdom about self driving is grounded more in marketing and fundraising than in engineering. The great thing about LIDAR is that you can build a prototype-level self-driving car and put it on a real road relatively quickly. The problem with LIDAR is that its utility breaks down much faster than that of vision. If you want to drive in conditions where humans drive today -- heavy fog or rain, sleet, snow -- you can't rely on LIDAR; you have to do that with cameras and radar. Once you've achieved a system that can do that, LIDAR becomes largely redundant; in good conditions it's telling you what you already know, and in bad conditions it's telling you less than that. So I am a big fan of Tesla's philosophy here (and of Comma.ai's for the exact same reason). This isn't just projection, btw -- Nvidia has a neural network that's been outperforming humans at camera-based object detection in bad weather for over two years now. I'm not aware of anybody achieving that with LIDAR.

> Even then, I don't think that's enough either. For example, say a ball bounces in front of you. I stop, expecting a child or a dog running after it. I think driving means interacting with the environment and responding appropriately to it, and that would require a general AI.

I think you're right that with self-driving cars, we will see new classes of accidents that we don't experience with human drivers. However, because humans are such terrible drivers in everyday situations, it is likely a huge net improvement to improve the general cases and have more failures around the edges. This is why examples like the beach ball, or even the Trolley Problem, don't move me. Those situations are exceedingly rare, and optimizing for them will only get us to a local maximum in the mission to save lives. The global maximum is to mitigate the consequences of e.g. texting and driving, or falling asleep at the wheel.

I largely agree with you, but disagree with the last part. The comparison shouldn't be against unaided humans, but against humans aided by the same technology working purely as an assist.

So I don't expect a huge improvement when going from lane keeping assist to Autosteer.

