> Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

Still don't see fully automated self-driving cars happening any time soon:

1) Heavy steel boxes running at high speed in built-up areas will be the very last thing that we trust to robots. There are so many other things that will be automated first. It's reasonable to assume that we will see fully automated trains before fully automated cars.

2) Although a lot is being made of the incremental improvements to self-driving software, there is a lot of research on the danger of part-time autopilot. Autopilot in aircraft generally works well until it encounters an emergency, at which point a pilot has to go from daydreaming/eating/doing-something-else to dealing with a catastrophe in a matter of seconds. Full automation or no automation is often safer.

3) The unresolved/unresolvable issue of liability in an accident: is it the owner or the AI that is at fault?

4) The various "easy" problems that remain somewhat hard for driving AI to solve in a consistent way. Large stationary objects on motorways, small kids running into the road, cyclists, etc.

5) The legislative issues: at some point legislators have to say "self driving cars are now allowed", and create good governance around this. The general non-car-buying public has to get on board. These are non-trivial issues.

You could be right.

My alternative interpretation of the timeline is that two forces will collide and make self-driving inevitable.

The first force is the insurance industry. It's really hard to argue that humans are less fallible than even today's self-driving setups, and at some point the underwriters will take note and start premium-blasting human drivers into the history books.
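
To put rough numbers on the underwriting logic: premiums track expected loss, so even a 2x gap in crash rates flows almost directly into the risk-based part of the premium. A back-of-the-envelope sketch (every number below is hypothetical, purely to show the mechanics):

    # Hypothetical underwriting comparison; none of these numbers come
    # from real actuarial tables.
    human_crash_rate = 0.040   # assumed crashes per vehicle-year
    av_crash_rate = 0.020      # assumed "2x safer" autonomous scenario
    avg_claim_cost = 18_000    # assumed average cost per claim, USD
    loading = 1.25             # assumed markup for expenses and profit

    def annual_premium(crash_rate: float) -> float:
        # Pure premium = expected annual loss, scaled by the loading.
        return crash_rate * avg_claim_cost * loading

    print(f"human driver: ${annual_premium(human_crash_rate):,.0f}/yr")
    print(f"autonomous:   ${annual_premium(av_crash_rate):,.0f}/yr")

Under those made-up inputs the human pays $900/yr and the autonomous car $450/yr; the point is only that the premium gap scales linearly with the crash-rate ratio.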

The second force is the power of numbers; as more and more self-driving cars come online, it becomes more and more practical to connect them together into a giant mesh network that can cooperate to share the roads and alert each other to dangers. Today's self-driving cars are cowboy loners that don't play well with others. This will evolve, especially with the 5G rollout.
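
To make "alert each other to dangers" concrete: real V2X work is built around periodic broadcasts of position, velocity, and intent. A loose sketch of the idea in Python (the field names are invented for illustration, not taken from any actual standard):

    import json, time
    from dataclasses import dataclass, asdict

    # Loose sketch of a periodic "here I am / here's my intent" broadcast,
    # in the spirit of V2X cooperative awareness messages. Field names are
    # invented, not from a real standard.
    @dataclass
    class AwarenessMessage:
        vehicle_id: str
        timestamp: float   # seconds since epoch
        lat: float
        lon: float
        speed_mps: float
        heading_deg: float
        intent: str        # e.g. "lane_change_left", "hard_brake"

    def encode(msg: AwarenessMessage) -> bytes:
        return json.dumps(asdict(msg)).encode()

    msg = AwarenessMessage("car-42", time.time(), 48.8566, 2.3522,
                           22.0, 90.0, "hard_brake")
    payload = encode(msg)  # broadcast every ~100 ms over the local radio link

Nearby cars that receive a "hard_brake" intent can start slowing before their own sensors ever see the hazard; that's the whole cooperative advantage.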


This reminds me that Tesla itself is starting to offer insurance, and it can do so at a much lower rate. I assume this is because:

1) Teslas crash much less often, mostly due to Autopilot.

2) Tesla can harvest an incredible amount of data from each of its cars, so it can calculate risk better.


How much does a Tesla know about the state of its driver, e.g. to detect distraction, tiredness, or intoxication?

Does Tesla see when you speed and increase your premiums?


Having high-speed steel boxes carrying human lives and who knows what else react to messages from untrusted sources. Hmm. What could go wrong?

I'm going to ignore the snark and pretend as though this is a good faith argument, because we're on Hacker News - and I believe that means you're a smart person I might disagree with, and I'm challenging you.

I want to understand why being in a high-speed steel/plastic box with humans (overrated in some views) controlled by a computer scares you so much. Is it primal or are you working off data I do not have? Please share. I am being 100% sincere - I need to understand your perspective.

To re-state in brief: (individual) autonomous self-driving tech today tests anywhere from "as safe as" to "2-10x safer than" a typical human driver. This statistic will likely improve steadily over the next 5-10 years.

However, I am talking about an entire societal mesh network infrastructure of cars, communicating in real time with each other and making decisions as a hive. As the ratio flips quickly from humans to machines, I absolutely believe you would have to be quantifiably unsane to keep endangering the lives of yourself, your loved ones, and the people in your community by continuing to believe that you have more eyes, better reactions, and can see further ahead than a mesh of AIs that are constantly improving.

So yeah... I don't understand your skepticism. Help me.


The risk is that a bad actor could hack into this network and take control of the cars.

Security-minded thinking dictates that we should move forward with the assumption that it will happen. The important outcome is not "we can't do anything as a society because bad men could hurt us" but "how do we mitigate and minimize this kind of event so that progress can continue".
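
One concrete version of that mitigation: have every vehicle sign its broadcasts and have receivers drop anything that doesn't verify, which is roughly how real V2X security (e.g. IEEE 1609.2's certificate-based signing) approaches it. A minimal sketch with Ed25519, assuming key distribution is handled by some trusted authority off-stage:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Sender side: each vehicle holds a private key; the public half is
    # distributed through a trusted channel (assumed, not shown here).
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    payload = b'{"vehicle_id": "car-42", "intent": "hard_brake"}'
    signature = private_key.sign(payload)

    # Receiver side: verify before acting; unauthenticated messages
    # are simply dropped, never fed to the planner.
    try:
        public_key.verify(signature, payload)
        # ...only now hand the message to the driving stack...
    except InvalidSignature:
        pass  # ignore the message entirely

Signing doesn't stop a compromised-but-credentialed car from lying, of course; that's where revocation and anomaly detection have to come in.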

Look: I don't want my loved ones in the car that gets hacked, and I'm not volunteering yours, either. Sad things are sad, but progress is inevitable and I refuse to live in fear of something scary possibly happening.

It is with that logic that I can fly on planes, ride my bike, deposit my money in banks, have sex, try new foods and generally support Enlightenment ideals.

I would rather trust a mesh of cars than obsess over the interior design of a bunker.


Totally agree.

If all the cars in the area know what one of the cars is about to do and can adjust accordingly, then it will be so much safer than what we have now that it's almost unimaginable.

It would seem that at some point in the future, people won't even want to be on the road with a human driver who is not part of the network.


The hype around self-driving cars is still very much alive. I tend to view any debate about fully autonomous (level 5) cars as unserious if it works with less than a 15-20 year time horizon.

In 2014, top humans could give a good Go-playing AI a four-stone handicap (enough to push a game outside the range of comparable players).

In 2017 AlphaGo could probably give a world champion somewhere between 1 and 3 stones.

From an algorithmic perspective, the range between "unacceptably bad" and superhuman doesn't have to be all that wide, and it isn't really possible to judge until hindsight is available and it is clear who had what technology. 15-20 years is realistic because of the embarrassingly slow rate of progress by regulators, but we should all feel bad about that.

We should be swapping blobs of meat designed for a world of <10 km/h for systems that are actually designed to move quickly and safely. I've lost more friends to car accidents than to any other cause - there needs to be some acknowledgment that humans are statistically unsafe drivers.


When you mention AlphaGo, you're committing a fallacy so famous that it has a name and a Wikipedia page (https://en.wikipedia.org/wiki/Moravec%27s_paradox): the things that are easy for humans are very different from the things that are easy for robots.

I don't disagree that computers are better drivers under certain conditions, but that's not the point.

I can drive myself home relatively safely in conditions where the computer can't even find the road. We're still infinitely more flexible and adaptable than computers.

It will be at least 20 years before my car will drive me home on a leaf- or snow-covered road. Should I drive on those roads? Most likely not, but my brain, designed for <10 km/h speeds, will cope with the conditions in the vast majority of cases.



> It's reasonable to assume that we will see fully automated trains before fully automated cars.

https://en.wikipedia.org/wiki/Paris_M%C3%A9tro_Line_14

Fully automated since 1998, and very successful.


There were automated railways 30 years before that too. https://en.m.wikipedia.org/wiki/Automatic_train_operation

I've lived in Washington DC long enough to remember back when our subway was allowed to run in (mostly) automated mode. There was a deadly accident that wasn't directly the fault of the Automatic Train Control (the human operator hit the emergency brake as soon as she saw the parked train ahead of her) but it still casts light on some of the perils of automation.

Another hard problem for AI is to "see" through rain.

That's hard for humans too. I think we need to give up on the idea that fully autonomous driving will be perfect.

I'm obviously talking about matching human performance, and that is the hard problem.

There is also an easy solution of just staying put.

I have driven in snow a few times when I was not sure I was even on the road, or when the only way I knew I was going the right direction was that I could vaguely make out the brake lights of the car going 15 mph in front of me.

That is an easy problem to solve, though, because I simply should not have been driving in those conditions.



Humans are pretty terrible at driving in rain and snow as well.

We already have fully automated trains: the DLR in London, for example.

I am optimistic about solving those problems. Regulation always comes after the tech is invented. Cars have more opportunity to fail gracefully in an emergency: pull off onto the shoulder and coast to a stop, or bump into an inanimate object.
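
The "fail gracefully" option even has a name in the AV literature: the minimal risk maneuver, where the car degrades to a safe stop when something goes wrong. A toy sketch of the control flow (the states and escalation policy are invented for illustration):

    from enum import Enum, auto

    class DriveState(Enum):
        NOMINAL = auto()
        DEGRADED = auto()       # e.g. a sensor dropped out; slow down
        MINIMAL_RISK = auto()   # pull onto the shoulder and stop

    def next_state(state: DriveState, sensors_ok: bool, planner_ok: bool) -> DriveState:
        # Toy policy: a planner fault escalates straight to the minimal-risk
        # maneuver; a sensor fault degrades first, then escalates if it persists.
        if not planner_ok:
            return DriveState.MINIMAL_RISK
        if not sensors_ok:
            if state is DriveState.DEGRADED:
                return DriveState.MINIMAL_RISK
            return DriveState.DEGRADED
        return DriveState.NOMINAL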


Of course the owner is to blame.

What if it's a rental or a lease? In a fully automated car, that's basically a taxi, and I don't think I should bear responsibility for my taxi driver's actions.

If/when we get fully automated cars, this kind of driverless Uber will become extremely common. Who bears the risk then? This is a complicated situation that can't be boiled down to "Of course the owner is to blame"


That is the most puzzling thing to me - not from a technical point of view, but from a societal one (https://en.wikipedia.org/wiki/Tragedy_of_the_commons). Compare with public mass transit: except in Singapore and Japan, it is mostly dirty, in spite of cleaning staff working hard and other people being around. In a taxi/Uber you have the driver watching, and other rentals are usually inspected after a return, and again immediately before the next rent-out, just to make sure.

Not so in car-sharing pools, and there it's already materializing as a problem. How do you solve that with your "robo-cab"? Tapping "dirty/smelly" in your app and sending it back to the garage? What if you notice it only 5 minutes after you started the trip, already robo-riding along? What if you have allergies to something the previous customer had on or around them? Or if they were on an opioid so potent that even skin contact with the residue could make you drop - as can, and did, happen? How do you solve for that without massive privacy intrusions? Or will those be the "new normal" because of all that Covid-19 trace-app crap?


Counterpoint: In a fully-autonomous situation, of course the AI is to blame.

I think we need to consider that case when/if it happens. For the foreseeable future there needs to be a responsible driver present.

To go contrary to this is to invite outright bans of the tech.
