
The slow progress is not an indictment of self driving. This is one of the toughest engineering challenges ever mounted by humanity.



Yeah, that's a bit of an overstatement.

We created nukes, landed on the moon, took sludge out of the ground and used it to power the world. We connected this world with wires and glass fibers to build a real-time global communication system that also gathered all of the world's information in a singular and immediately accessible place.

Building a self driving car is hard but really not as tough.

How quickly do you think we'd get self driving cars if the USA spent 4% of the federal budget on it like NASA received in the 60s? (That's about $40B a year for a decade.)


Depends what you mean by "self driving car". IMO, building a fully self driving car requires artificial general intelligence, which is so hard I'm not sure humanity will ever achieve it. If we ever manage to create the kind of AI that fully self driving cars require, self driving cars will be one of the most boring and trivial things that is done with that AI.


My guess is that it (full 100% self driving) has around the same technological difficulty as putting a man on Mars. It also depends on what constitutes "100%" of course.

> How quickly do you think we'd get self driving cars if the USA spent 4% of the federal budget on it like NASA received in the 60s? (That's about $40B a year for a decade.)

At that cost we could adapt all infrastructure to suit self driving cars, instead of developing self driving cars to adapt to human infrastructure. But I think that kind of cost is always going to be beyond what's acceptable.

I think the discussion is mostly pointless because of diminishing returns: if you can have "99.9% full self driving" for a tiny fraction of the cost, who would want to pay to go from 99.9% to 100%?

Initially, human remote drivers will take care of the rest. And then there is a very slow commercial race towards using fewer humans, which drives the very slow march to 99.9% and 99.99% self driving and so on. Handling the last second of the last edge-case route is basically something that requires AGI (as long as we don't adapt infrastructure).


I agree with most of this, but remote drivers are completely unfeasible with current or planned network infrastructure. You could drive with acceptable levels of sensor bandwidth and latency right next to an unobstructed low-utilization 5G tower, and that's about it. That's unlikely to correspond with the locations one would need remote drivers.
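
To put rough numbers on that (everything here, including the camera count, bit depth, and compression ratio, is invented for illustration):

    # Back-of-envelope, with invented numbers: 8 cameras, 1080p at 30 fps,
    # 12 bits per pixel, and roughly 100:1 video compression.
    cams, w, h, fps, bits_px, compression = 8, 1920, 1080, 30, 12, 100
    raw_mbps = cams * w * h * fps * bits_px / 1e6
    print(f"raw: {raw_mbps:.0f} Mbit/s, compressed: ~{raw_mbps / compression:.0f} Mbit/s")
    # raw: 5972 Mbit/s, compressed: ~60 Mbit/s -- sustained, with low latency,
    # from a moving vehicle, which is exactly the hard part.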


Luckily, the hardest situations occur in cities rather than on highways, and cities have good broadband. But yes, it's still "100%, but only here", a.k.a. not 100%.

I think remote drivers will probably have to "rescue" cars without piloting them, usually just assessing a situation and overriding something (driving through an obstacle etc). A passenger (if there is one) could do the same. But sometimes actual remote driving would be required of course.


We won’t know how tough it is until it gets done once.

I don't think we've demonstrated anything autonomous beyond the most trivial kinds of autonomy (e.g. the V2) in all of our technological history.


All great achievements, but none of these is in the same ballpark of difficulty as creating a general artificial intelligence (which is pretty much what self driving cars need to achieve true level 5 autonomy).


Level 5 driving doesn't require AGI at all.

It's an extremely narrow set of problems that have to be solved incredibly well. It mostly just comes down to creating an accurate 3d representation of the world from a bunch of sensors. You also have to correctly segment and label each object in that 3d representation. If you did those two things extremely well, the actual driving logic can be hardcoded.

The problem is that each of these systems has problems so they all have to improve and compensate for each other.


This is incorrect. The hardest part of developing a self-driving car is predicting the world around you in the immediate future. Knowing whether or not that object is a person is a lot easier than guessing whether or not that person is going to jump out in front of the car 1 second into the future. You have to know who is going to run stop signs, when cyclists are about to cut you off, when someone is about to back up into a parking spot.

I don't know whether or not AGI needs to be developed to make a useful self-driving car, but as time goes on I'm beginning to believe that's the case.


This is incorrect.

Predicting motion once you have small time slices and very accurate 3d representations is very very easy. You can easily calculate expected paths. You have to remember that computers see the entire situation at the same time. A bike doesn't just cut off a self-driving car the same way it does for a human. Humans are slow, our increments of time are large and in the hundreds of milliseconds and we can only focus on a couple of things at a time. A computer will notice the slight change in velocity and acceleration within single-digit milliseconds. Then it just has to predict the probability of collision. These calculations are simple.
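
For a sense of how cheap that extrapolation is, here's a minimal sketch in Python (the constant-velocity model and all the numbers are assumptions for illustration, not anything a real stack necessarily uses):

    import numpy as np

    def closest_approach(p_ego, v_ego, p_obj, v_obj, horizon=3.0, dt=0.005):
        """Extrapolate two tracks under a constant-velocity model; return
        the minimum separation (m) and the time (s) at which it occurs."""
        t = np.arange(0.0, horizon, dt)
        rel = (p_obj - p_ego) + np.outer(t, v_obj - v_ego)  # relative position over time
        d = np.linalg.norm(rel, axis=1)
        i = d.argmin()
        return d[i], t[i]

    # Ego car at the origin doing 10 m/s; a bike 20 m ahead drifting into the lane.
    d_min, t_min = closest_approach(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                                    np.array([20.0, 2.0]), np.array([2.0, -0.5]))
    print(f"minimum gap {d_min:.2f} m at t = {t_min:.2f} s")
    # -> minimum gap ~0.75 m around t ~ 2.5 s: time to brake.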

Deciding what to do in these situations can very much be efficiently hardcoded using decision trees. No one right now working on self-driving cars dares to use a neural network or any other unexplainable & unbounded ml algorithm for policy. You have to be able to hard code in new edge cases as they emerge. You have to be able to study specific crashes or incidents and then adjust the decision-making scheme to specifically avoid that situation in the future.
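
Something like this toy hand-written policy is what "hardcoded" means here; the object classes, thresholds, and actions are all invented for the example, but the point is that every branch is explicit and auditable:

    def policy(object_class, gap_m, time_to_collision_s):
        """Toy hand-written decision tree for one tracked object.
        Every threshold is explicit, so a specific incident can be
        traced to a branch and that branch adjusted afterwards."""
        if object_class == "pedestrian" and gap_m < 15.0:
            return "YIELD"
        if time_to_collision_s < 1.5:
            return "EMERGENCY_BRAKE"
        if time_to_collision_s < 4.0:
            return "SLOW_DOWN"
        return "PROCEED"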

Truly, the hardest problem is taking in data from multiple sensors, segmenting it, and then labeling it, all in real time. The sensors are faulty and super expensive. There are also so many different objects out there. If you actually look at the ancillary startups in this industry, they're not working on "common-sense" general intelligence algorithms. They're working to make better and cheaper lidar. They're working on computer vision problems. They're working on image segmentation.


You're focusing on the wrong part of the problem. You're thinking of everything as a giant physics simulation, and completely ignoring the hardest part: humans.

Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time was 0.00 seconds it's too late to avoid a collision. The problem is that the robot didn't anticipate that the pedestrian would cross despite not having the right of way. Humans are better at reading social cues than robots. Maybe robots can learn that, but it's a significantly harder problem than path planning and image segmentation. This goes beyond pedestrians: it also applies to drivers and predicting their behavior on the road. And if you try to drive cautiously to avoid this potential scenario, you effectively stop and crawl every time you see a pedestrian, which is not very useful for getting from point A to point B (not to mention all the pissed-off traffic behind you).

The reason it's difficult is because it's an uncontrolled environment, and the robot has to be able to anticipate what other drivers/cyclists/pedestrians will do. Robots have done wonders in controlled environments, but trying to bring them to the real world has always been a struggle.


I doubt that most human drivers are good enough to avoid a collision in that situation. You will always be able to come up with a scenario that will fool a computer; you can also always come up with a scenario that will fool a human driver.

The standard isn't "perfect under all conditions", it's "better than a human". Humans are, honestly, pretty bad at driving. The bar is not that high, perhaps unfortunately.


> Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time was 0.00 seconds it's too late to avoid a collision.

Why does a robot driver need to anticipate this? Does a human driver need to?


Er, yes? Remind me not to walk close to your car :P


As both a pedestrian and a driver, I certainly have to read social cues.

If I'm walking up to a pedestrian crossing and a car is approaching, I don't just step out into the road, even though I have the right of way. I try to make eye contact with the driver to see if they recognize I'm crossing. They'll often nod or do something similar to signal that they're letting me cross.

A machine has to understand these social cues as well. It might even be helpful if the machine has a way to signal its intentions back to pedestrians.


You can probabilistically predict those events with machines much better than humans can. You don't really know someone will decide to run a stop sign, but you do know when the vehicle is past the point it was supposed to start slowing down. That's relatively "easy", we have been predicting physical object motions with analog computers, even. As the parent says, accurate data from sensors is a much bigger problem. But once you have the data, you can model these objects with dumb algorithms.
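
A minimal sketch of that "past the point where it should have started slowing" test; the physics is just v² = 2ad, and the comfortable-braking threshold is an invented number:

    def likely_stop_sign_runner(v_mps, dist_to_line_m, a_comfort=3.0):
        """Flag a vehicle if stopping at the line would now demand
        more than comfortable deceleration (from v^2 = 2*a*d)."""
        if dist_to_line_m <= 0:
            return True
        required_decel = v_mps ** 2 / (2 * dist_to_line_m)
        return required_decel > a_comfort

    # 13 m/s (~47 km/h) with 20 m left needs ~4.2 m/s^2 of braking: flag it.
    print(likely_stop_sign_runner(13.0, 20.0))  # True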

Computers can also have a much faster reaction time, so a human may need to predict one second ahead, but computers may be able to get away with less.


> You can probabilistically predict those events with machines much better than humans can.

This is an assumption and has not been shown to be correct, or even probable.


I question the claim that you need to predict the immediate future. Human reaction times are pretty slow, to the point where when our feet hit a pedal in response to light that hit our eyes two seconds ago, something that is prediction for us could be reaction for a machine. A human has to live two seconds in the future because our appendages and lower faculties are lagging behind in the past.
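
To put illustrative numbers on that (the speed, reaction times, and braking rate below are all assumptions):

    def stopping_distance(v_mps, reaction_s, decel=7.0):
        """Distance covered during the reaction delay plus braking
        (d = v*t_react + v^2 / (2*a))."""
        return v_mps * reaction_s + v_mps ** 2 / (2 * decel)

    v = 15.0  # ~54 km/h
    print(stopping_distance(v, 1.5))   # human-ish reaction:  ~38.6 m
    print(stopping_distance(v, 0.05))  # machine-ish reaction: ~16.8 m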


> I question the claim that you need to predict the immediate future.

Alternatively you could develop braking technology which gives vehicles a stopping distance of 0 m, but this might be a bigger technological advance than full self-driving AI, and I'm not sure it would be that comfortable for the passengers...


For me, the definitive proof that we won't have self-driving anytime soon is the massive failure of the recent chatbot fad.

One key part of driving is communicating - with pedestrians, cyclists, other drivers. This happens through body language and other fairly subtle cues.

When you can't make AI work for responding to questions given in text form on an extremely limited problem domain, how on earth would it work for something that's orders of magnitude less well defined and more broad?


"If you did those two things extremely well, the actual driving logic can be hardcoded."

I mean, it _could_ be hardcoded, but there are millions of edge cases, so it's pretty infeasible. I agree with the parent comment that full level 5 requires something close to AGI: the difficult part of getting a self driving car is giving the AI something along the lines of "common sense", the ability to reason about what to do in an unfamiliar situation.


Let's say you do sense and label the entire world (which in itself is an impossible problem). Do you think every single action that a driver takes on the road can be hardcoded?

What happens when a street is temporarily closed but doesn't have the correct signage? What if there's a police officer or road worker signalling instructions by waving their hands? What if the lights at an intersection stop working? What happens if there's a car burning on the side of the highway and drivers need to change lanes to go around it?

And these are just some of the problems in a large American city. Think about rural areas, places with more aggressive traffic, places with wildly different written and unwritten traffic rules.


> Level 5 driving doesn't require AGI at all.

> It's an extremely narrow set of problems that have to be solved incredibly well.

It does require AGI, at least if you're planning to drive on most of the world's roads, and not only on some "pampered" streets in the middle of the desert or on heavily-regulated and very well maintained streets like in Norway or Switzerland.

As a human, I have a quite "accurate 3d representation of the world", but even so, many times I'm left dumbfounded by what the people driving on the same streets as me are doing. And even if you do manage to replace all those other people with self-driven cars, how do you account for cows ending up in the middle of an interstate road (it happened to me at least once), for wrong street markings or no street markings at all, or for drunken bicyclists whom you can't see at night?


It's way tougher.


Waymo has a far better safety record than NASA, and it's building self-driving cars. NASA is allowed to kill people; Waymo isn't. The Internet is allowed to be full of malware; Waymo isn't.


I think it parallels nuclear fusion in a lot of ways. The impact is unquestionable, but the feasibility is an ongoing debate.


My peeve is not the slow progress: My peeve with self-driving companies, Waymo and Tesla alike, is their constant misrepresentation of their capabilities and their timelines for the benefit of stock value and public opinion. The technology doesn't really work, and Google's marketing for the past five years has claimed regulations were the only thing holding them back.


Is the market really being fooled? Tesla's stock price falls pretty often, and in a universe where the hype was real it would be a lot higher.


In all fairness, it seems as if a lot of the companies and individuals on Team Right-around-the-Corner have really backed off in the past year or so. I'm not sure how much of this is the deflating of unrealistic expectations and how much is just a tacit agreement to stop trying to top each other given how many hurdles remain.


Citation? This feels like you swapped Google for Tesla. (Elon literally said that; I can't find anything that says Google has said it.)


I mean, off-hand, this paywalled article from 2016: https://www.barrons.com/articles/googles-self-driving-cars-f... The search result for it had the text "Google is so confident about its technology that the Internet search giant has already agreed to accept liability if its self-driving cars cause an accident."

Here's someone following right along with the suggestion that regulations, not lack of technology is to blame: https://www.wired.com/story/outdated-auto-safety-regulations... The author, part of the Competitive Enterprise Institute, works for Google: https://services.google.com/fh/files/misc/trade_association_...

That was page one of my search results, but suffice it to say Google has been insinuating this for a while, in both the Chris Urmson era and the John Krafcik one.

I specifically referred to Tesla in my post as well. I've seen the suggestion that Elon's claims about release dates for Autopilot features were effectively timed to manipulate the market. I'd give credence to that theory, or to the theory that Elon just has no clue how far he actually is from success. One of the two.

Both companies horribly misrepresent the fact that self-driving isn't around the corner.



