Hacker News
Volkswagen exec admits full self-driving cars 'may never happen' (thedrive.com)
486 points by stopads 32 days ago | 897 comments



I agree in part, in that I am skeptical that full self-driving cars will happen in the next few years, but he is completely wrong about the long term. Not only will the tech get as good as humans, but most people forget that the environment will meet the cars part way. We will eventually update markings and beacons on the roads to make it easier for the cars, build networks over which the cars can talk to each other, and make special lanes for self-driving cars only, among other improvements. Eventually non-self-driving cars will not be allowed on the road at all, and driving them will be a niche hobby confined to race tracks.


We can't even bloody keep the yellow markings on the road visible, and "paint" is a technology we've had for thousands of years. Where in dog's name is the money for installing and maintaining all that smart infrastructure going to come from?


I am more fearful of trolls tricking the technology. I doubt that the software/hardware will have the common sense we humans have. It's this common sense that keeps us alive in unexpected scenarios, like when the paint on the road is missing.


What's stopping people from destroying yield signs and other safety-critical road infrastructure now? Sometimes common sense helps, but if you're out to cause chaos, there's a ton of things you can do today that would cause crashes - yet most of the time, people don't.


If you destroy a yield sign, maybe it will eventually cause some problems, or maybe not. We could imagine a world in which destroying smart infrastructure could pretty instantaneously and reliably cause massive traffic jams, which seems like it would appeal more to certain trolls and perhaps protestors.

We could also imagine a world in which more subtle attacks could reliably and fairly quickly cause injury/fatality accidents, which might appeal to terrorists.


I think this type of "stress testing" with destroyed or modified signage would be an obvious direction for AV development once the basics are down. With HD mapping and other technology used to augment what is gathered by their sensors, AVs are eventually going to be better overall at handling unexpected scenarios on the road than humans are.


You can cause horrible traffic jams now by throwing a few mattresses or a bunch of caltrops onto the highway at the right time of day.


Then imagine the jam flowing around the blockage without stop-starts due to the automated vehicles networking and managing traffic.


The future needs self-throwing mattresses.


If you destroy a yield sign, the system will immediately know because all the cars previously recorded it.
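One way this could plausibly work (all names and thresholds here are hypothetical, a sketch of the idea only): each car reports which mapped signs it actually observed on a segment, and a sign that stops appearing across recent passes gets flagged for inspection.

```python
from collections import defaultdict

# Each car reports which mapped signs it actually saw on a road segment.
# If a known sign stops appearing in recent passes, flag it for inspection.
EXPECTED_SIGNS = {"segment-42": {"yield-north", "stop-east"}}
MISSES_BEFORE_ALERT = 3  # hypothetical threshold

miss_counts = defaultdict(int)

def report_pass(segment, signs_seen):
    """Called once per vehicle pass; returns signs that look missing."""
    alerts = []
    for sign in EXPECTED_SIGNS.get(segment, set()):
        if sign in signs_seen:
            miss_counts[sign] = 0          # seen again, reset the counter
        else:
            miss_counts[sign] += 1
            if miss_counts[sign] >= MISSES_BEFORE_ALERT:
                alerts.append(sign)
    return alerts

# Three consecutive passes that fail to see the yield sign raise an alert.
report_pass("segment-42", {"stop-east"})
report_pass("segment-42", {"stop-east"})
print(report_pass("segment-42", {"stop-east"}))  # ['yield-north']
```

The threshold exists so one car with a dirty camera doesn't trigger a false alarm; it takes several independent misses.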


So presumably every self-driving vehicle is going to be constantly streaming all sorts of information in their purview to a central hub?

That is a privacy nightmare without regulation requiring extraneous information to be scrubbed before upload: for example, blocking out pedestrian data - not just faces but clothing and gait - and probably also any data captured through windows.

It would be ideal to have a limited view of just the roads and signage, and have a retention plan that gradually keeps less and less historical data.

For accident review more of the data might be required, so vehicles should keep the last 24 hours of raw data.


Self-driving vehicles should be implemented with the same care Apple has given Touch ID and Face ID in regards to protecting sensitive data.

Having a central database capable of being scraped and processed to determine where any person is at any given time is a non-starter. Care needs to be taken to scrub all extraneous data from the fleet's network.


A lot of things should be implemented in a particular way, but most aren't. Decades of practical experience show that, unless forced by regulation, auto manufacturers will pick the cheaper and more profitable option over the safe option every single time.


If I had to choose between privacy of my location or the convenience of a self driving car, I’d choose the car. Every single time.

Convenience wins out over privacy for me. It’s the same for the billions who elect to use Facebook. I don’t care if Tesla knows I visited the supermarket.


I wish there was a way for individuals to make that decision for themselves alone.

As a pedestrian being observed by AVs, your [valid] choice has a significant impact on my ability to choose privacy over convenience.


Tell that to the guy with a smartwatch whose wife noticed he had a high heart rate at night, somewhere he was supposed to be doing something else.

I don't have a link for the story. But the idea behind this is not that one shouldn't cheat on his wife because he will get caught. To really be human and be honest, one must not have an urge to cheat on his wife (or husband) in the first place, because he loves her. If someone has dirty thoughts and the only thing preventing him from acting on them is that "Tesla will know"...

People should be able to cheat, and they should also get caught - but being good only because you are constantly watched?

This argument looks like "saying you don't care about privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say".



This argument is about the fact that you have to make concessions - tradeoffs.

You have a choice to not use the autonomous car, to not use the apple watch, to choose a privacy-conscious internet carrier.


Real world trade offs please...

If there are only autonomous cars that track your position, how are you going to make a trade-off?

Right now I don't have Facebook, and no one in my family or friends contacts me. Why? Because I don't have Facebook. That is my trade-off. A real trade-off would be if I could select a different provider and still be in contact with my family and friends. Right now it is a monopoly degrading my quality of life. It would be different if I had other options - for example, ones I could pay for that would not track my data.

If there were a way to pick your autonomous car provider based on price/privacy, that could be a trade-off.

If there is one option that you can either use or not use, that is not a trade-off. I am not a native English speaker, but a trade-off for me is when you have multiple options to choose between - not when you have all or nothing...


Did you read the comment?

The problem is that self-driving vehicles implemented with fleet data that vacuums up everything will spy on everyone, not just the driver.

They need to be implemented with on-board redaction.

Otherwise there is no choice for anyone: the companies or the ruling party can do facial or gait recognition on pedestrians and other drivers, and scrape license plates off all parked and driving cars from the data stored in the cloud.

With just lidar I imagine this is less of a problem, but even then pedestrians will still need to be redacted to prevent gait recognition.

My first comment wasn't about your regard for personal privacy; it was about everyone's. One person's data is mostly worthless; everyone's is priceless.


Also FWIW Apple has a good track record with privacy, it's all opt-in.

Edit: Health data is also end-to-end encrypted (accessible by your devices only, Apple servers don't have keys, if you forget your passcode you lose the data); https://support.apple.com/en-ca/HT202303


>> If I had to choose between privacy...

That sounds like it's taken right out of an Orwell story.


You already make this tradeoff - you have a bank account instead of hiding your money in a mattress. You're probably browsing the web from a normal internet connection, not a VPN, or some alternative?


Yes, and I have some control over that which I exercise regularly. Not enough control, but what I have I use proactively.

I'm also active in trying to wrestle more control of the data being collected on me.

And I am not willing to give up my privacy for your convenience and not happy with those who'd willingly give up theirs in a way that sweeps mine up with it.


> That is a privacy nightmare

I don't want to disappoint, but the future will have no privacy whatsoever.


There will be privacy until the singularity, don't worry.

Continued encroachments on privacy and personal autonomy may indeed deteriorate them, even cripple our ability to protect those rights, but simply worrying about that doesn't mean I must go gentle into that good night.


> That is a privacy nightmare without regulation on scrubbing extraneous information, done before upload: for example blocking out of pedestrian data; not just faces but clothing and gait, and probably also any data captured through windows.

If you can be photographed legally in a public place, what's going to protect your privacy in public?


I feel like this is almost like trying to resist the allure of drones in war. It’s too easy to do and too hard to stop. It takes almost global consensus.

Any number of parties could lawfully and unobtrusively start slurping up all sorts of data which is in a kind of plain view, and nobody would really be the wiser.


> So presumably every self-driving vehicle is going to be constantly streaming all sorts of information in their purview to a central hub?

That's a terrible design.

The obvious way is that the sign/beacon itself pings a server every 5 minutes, and when that stops you know it's broken.
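A heartbeat monitor like that could be a few lines; this sketch uses hypothetical sign IDs and assumes a tolerance of two missed pings before alarming:

```python
import time

HEARTBEAT_INTERVAL = 5 * 60  # seconds; signs ping this often, per the comment above
GRACE = 2                    # missed beats tolerated before alarm

last_seen = {}  # sign id -> timestamp of last ping

def record_ping(sign_id, now=None):
    last_seen[sign_id] = time.time() if now is None else now

def broken_signs(now=None):
    """Signs that have missed more than GRACE heartbeats."""
    now = time.time() if now is None else now
    cutoff = HEARTBEAT_INTERVAL * (GRACE + 1)
    return sorted(s for s, t in last_seen.items() if now - t > cutoff)

record_ping("yield-1138", now=0)
record_ping("stop-2042", now=0)
record_ping("stop-2042", now=1200)   # this one keeps reporting
print(broken_signs(now=1200))        # ['yield-1138'] after ~20 min of silence
```

The grace period is the tradeoff knob: too small and a flaky radio link floods you with false alarms, too large and a downed sign goes unnoticed longer.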


And it will play a helpful message like "Recalculating route: 3 self-driving fatalities registered at this location."


If there is a central database with all the signs, you don't need a physical sign at the edge of the road for an AV to notice; it already knows the sign is "there" from that DB.


Bear in mind that's not how traffic law works anywhere right now. You are only required to obey signs and markings which are actually there. And traffic signs get removed all the time for road works; I honestly can't imagine every action like that being recorded in some central database that all cars could query.


So what happens when a road sign legitimately changes? Does the system use the previous values recorded or the new value?

I'm also curious how situations will be handled where there are two speed signs: the regular posted speed and a temporary speed for construction. How does it know which to obey?


The permanent signs would already be in the database. The signs you'd worry about would be temporary, for road works or accidents, but perhaps these locations could be made electronically available.


Ideally we get to a point where we’re at least publishing signage to a database so that cars don’t have to rely on visual cues both human and automated.

I personally don’t think it’s possible to make automated cars work well without a ton of infrastructure support.


The law doesn't require you to follow a sign that's not there. In fact, I'd argue that doing so can be dangerous in some situations (and obviously advisable in others). A car should only ever follow what it can actually see, which makes the whole idea almost daft, since computer image recognition is still so poor.


Your car wishes to inform you that there's a sign for McDonald's where they're having a sale 2 for 1 quarter pounder with cheese.


If it's such a busy location that destroying it would cause a big jam, how did the vandals get away with destroying it? And in out-of-the-way places, it won't cause a big jam.


Few places are busy in the middle of the night.


>What's stopping people from destroying yield signs and other safety critical road infrastructure now?

People do destroy yield signs and other safety critical road infrastructure now. I've known more than one person with an appropriated road or safety sign as a home decoration.


People tag signs on the freeway, to the point where Caltrans still has razor wire over some of them. It's already happening.


Nothing, and it happens all the time. It's something a human driver familiar with the road will notice, and it is unlikely to cause an accident; an out-of-towner might get into one. An automated vehicle would ideally not even be looking at signs, just operating from its stored data.


That's an absolutely crazy idea. You are only required by law to follow signs that are actually there, full stop; a car should never rely on some built-in database. I cannot imagine a database of signs that would be constantly up to date and somehow distributed to all cars on the road.


> I cannot imagine a database of signs that would be constantly up to date and somehow distributed to all cars on the road.

Really depends on where you are. I agree I can't imagine it in the USA, but in the Netherlands every square metre is documented, attributed and zoned (yes, including the leftover grassy triangle bits between highway ramps - everything). We could totally do that if necessary.

I don't know if documenting all the traffic signs would be the right solution. If anything, I would imagine this database including far more virtual traffic signs than physically exist - not the legal traffic rules, but virtual ones that would be nice to have if everybody in the flock held to them.

Point is that worldwide there is a huge variety in the quality of roads, the quality of signage, driving culture and attitude, and the general predictability of the environment. Some places will lend themselves more naturally to the first forms self-driving cars will take than other places that are more "free form".


I don't think that's crazy. Why shouldn't local authorities, state DOTs, and the national DOT be obligated to also update a database that self driving cars use? They already have such databases for their own records, usage, and analysis. In a world in which SDCs are normal that is how you would expect it to work.

It's a bigger expectation to suppose that the car will perceive the environment better than a person would and make correct on-the-fly decisions about traffic signs when snow obscures the sign and a little bit of ice and muck obscures its cameras ever-so-slightly, making some of the sensors go half-berserk.


>>It's a bigger expectation to suppose that the car will perceive the environment better than a person would and make correct on-the-fly decisions about traffic signs when snow obscures the sign and a little bit of ice and muck obscures its cameras ever-so-slightly, making some of the sensors go half-berserk.

Then it should do the same thing a person is required to do in that scenario - slow down and exercise caution. Otherwise you will get a car that confidently drives forward because it "knows" a sign is there. If the sensors can't cope with that, then the car shouldn't be on the road at all, period.

>>Why shouldn't local authorities, state DOTs, and the national DOT be obligated to also update a database that self driving cars use?

Because the same local authorities don't even have the budget, time, or competency to fix the most minor issues with our roads. Potholes go unfixed for weeks; there's no budget for cleaning, for salting, for replacing missing signs and lamp posts, or simply for reviewing whether existing signage is still appropriate after changes they make. Yet the same authorities should be tasked with real-time updates to some database of signage? I'm sorry, but I'm just trying to be realistic here - we can write legislation requiring authorities to do something, but in the real world it's just not going to happen reliably enough to trust. I know I wouldn't.


The obvious counterpoint to "operating from stored data" is an accident or construction causing a lane to be closed, or a child chasing a ball across the road.


In the hypothetical world in which self driving cars are the norm and not the exception detours would be in the system so that SDCs would not need to read signs. Even now a lot of traffic maps get updated to show detours, delays, and temporary closures due to accidents.

FWIW I am a near-term self-driving car skeptic and have been for a long time. I just think that these are not the kind of issues that really pose a major obstacle whereas drunk people wandering across the street are.


We will nano-tag alcohol and keep a database of all wandering drunks, for self-driving cars to either avoid them or take them home!


I’ve seen intersections where people have gone off-road in an accident and knocked down stop signs, and locals approaching know the sign should be there. Would a car?

The only way I can see that working is if there is some kind of geographic database of stop locations and the like, but at that point you need consistent connectivity to obtain that data, right? It may work in larger towns and cities, but what about rural areas?


We’ve had offline GPS units for decades. You could easily have a database of every stop sign, and it would fit on an SD card.
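A back-of-envelope check (the sign count and record size are rough assumptions, not official figures):

```python
# Rough estimate: could a nationwide stop-sign database fit on an SD card?
num_signs = 20_000_000        # assumed: tens of millions of regulatory signs
bytes_per_record = 24         # lat/lon as 8-byte doubles + type/heading fields

total_mb = num_signs * bytes_per_record / 1_000_000
print(f"{total_mb:.0f} MB")   # 480 MB - well within a cheap SD card
```

Even if the true sign count were an order of magnitude higher, it would still fit on consumer flash storage, so offline distribution is the easy part; keeping the data current is the hard part.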


The yield signs could be backed up with an official map all cars read from as well.


And then let that ‘official map’ from Google charge $100/month subscription fee.


You have to actually go out there and physically do the act.

In a wired world the distance to everything is zero. If you can find an 'in', you can carry out this attack in any place in the world, from the comfort of your nerd cave or barracks.


They do that very often. I once drove the wrong way on a one-way street because some troll had changed the arrow's direction.

Fortunately there was almost no traffic.

Signs get vandalized or stolen all the time here in Uruguay. Mostly vandalized.


Still better than humans: humans kill 3,000 people with vehicles every day, and most of them aren't even trying.


Yeah, but if the system is designed the wrong way, hackers could kill millions in a single moment. Admittedly, this danger already exists, as in all modern cars there is a layer of (hackable, wirelessly networked) software between driver and car.


Hackers could hijack a missile and kill a lot of people too.

This is why we have security. And honestly modern cars are so computerized already I think your argument applies to them as much as autonomous cars.


“We’re already doomed” isn’t a good reason to not be concerned about new ways to be doomed.

Yes, we have security. Yes, cars can (probably) already be hacked, perhaps even en masse rather than in targeted ways.

But: one of the big advantages of machine learning in cars is that every car can learn from the experiences of every other car. That makes them monocultures. Monocultures are fragile. Find the weakness of one, and you find the weakness of all.

I want the benefits of the former without the risks of the latter. I don’t know if that’s even possible.


> I want the benefits of the former without the risks of the latter. I don’t know if that’s even possible.

That's where competition comes in. Just like AMD vs. Intel or OSX vs. Windows. If you find a weakness in one, you don't necessarily find a weakness in the other.

Hence, finding a weakness in Tesla doesn't mean it will work against Waymo or Uber.


To the extent that there is competition, there is an identical loss of opportunity to learn from each other’s mistakes.


I am sorry, what? Which missile system can be hacked and launched remotely? This is a ridiculous claim.

Cars are already getting hacked. The whole point of the post is to highlight how car manufacturers have no concern for or competence in security; it will be like the Boeing scandal but much worse.


At best, you're conflating two vastly different things - Boeing engineers and car manufacturers.

At worst, you're a troll. Cars are nowhere near being hacked at the scale or magnitude of the numbers involved in the Boeing fumble. Nowhere near. Not even remotely close.


I'll bite. Aside from Tesla owners, I do not know anyone in my life today whose vehicle connects to the Internet. That is to say, I don't see how your comparison to the present day is accurate.


>Aside from Tesla owners, I do not know anyone in my life today whose vehicle connects to the Internet

When did the whole OnStar botnet start? I think it's earlier than 2010.


I wasn't aware of that. I suppose that would count as the car, but doesn't most car hacking occur when the component has access to, e.g., the CAN bus?


Why do you think OnStar doesn't have access to the CAN bus? https://news.ycombinator.com/item?id=10008228


pretty much all cars with modern head units connect to the internet for things like traffic data and software updates.

as of 2018 it is no longer legal to sell a new car in the US without a backup camera, and 99.9% of cars implement that with a digitized head unit that, wait for it, connects to the internet.

thus, very few new-manufacture cars in 2020 do not connect to the internet, bluetooth, sometimes wifi, etc.



My 2017 Ford connects to wifi.


Come on, you know missiles are locked down with way more security than consumer vehicles.


> Hackers could hijack a missile and kill a lot of people too.

That's a sunk cost fallacy. Just because other things are hackable as well doesn't mean there is no danger.

> And honestly modern cars are so computerized already I think your argument applies to them as much as autonomous cars.

I've said that in my comment above as well, but it was in an edit before you replied so I guess you didn't refresh before submitting the comment.


> That's a sunk cost fallacy. Just because other things are hackable as well doesn't mean there is no danger.

It's really not a sunk cost fallacy. It's good evidence that the "think of the terrorists" angle is overblown. There are plenty of other problems with self-driving cars, but not that one.


And there were plenty of other problems with missiles before that one too. As the news dramatically reminded us recently, the number one cause of non-accidental casualties on planes is being shot down by the military, by a very large margin ahead of terrorism. Similarly, it is not hard to guess that car-related casualties due to terrorism will never be significant compared to other causes. Terrorism overall is not a big issue, despite our liking to think that our main problems are caused by a few malignant individuals.


I think what they meant by "tricking" the cars is to paint roads in a deceiving way to make cars meander into lakes or something like that, not hacking everyone's cars.


It doesn't even have to be malicious. I was driving through Germany recently and some roadworks on a highway meant we were using temporary contraflow lanes, and even with my big fancy human brain, I was sometimes struggling to decide which of the layers of permanent/temporary lane markings I should observe.


Sounds like a great case for computers. They can communicate with each other and all make the same decisions. Even if it's "wrong", it'll be safer if they all make the same wrong decision than a bunch of people coming to different conclusions and demonstrating it only by putting 3,000 pounds of steel where they think it should go.


But cars will recognize that something changed and validate it.


This presumes a car with logical abilities that would negate the need for infrastructure to "meet it halfway."


Like a lot of technologies. Nuclear plants still happened.


There is no evidence that automated driving systems are any safer than human drivers, or that they ever will be.


> or that they ever will be

Eventually being better than humans seems like it's obviously going to happen. Not driving drunk, not getting distracted, and not sleeping at the wheel alone are advantages enough that I would be amazed if the balance doesn't eventually fall in favor of the self-driving car. Self-driving cars don't even have to be particularly good drivers to be better than the average human, given how much the human average is dragged down by recklessness.


While I would love for self-driving cars to be a thing, the AI to detect tired, distracted, drunk or sleeping drivers could be in every car way, way before self-driving cars will be a thing.


> Self-driving cars don't even have to be particularly good drivers to be better than the average human, given how much the human average is dragged down by recklessnes

Tell that to the jury when your self-driving car runs into the side of a school bus full of kids (which will happen given any reasonable adoption).

Self driving cars would have to be held to an almost insanely high standard to be “winnable lawsuit” proof.


If we (society) took an air traffic safety approach, e.g., analyzing each crash, root causing it, and working to address it, one can imagine a day where self-driving cars are very safe. (I’m sure that day is a long way off still.)


These lawsuits already happen. Right now.

A car manufacturer can already be sued for things that go wrong in a car, but the world keeps spinning.


Nope, they have already lobbied so that the car owner is responsible for failures of the self-driving system.


Such lawsuits are a net detriment to society. It's why light airplane design remains stuck in the 1960s, and why vaccine companies are protected from lawsuits.


I disagree that it's obvious. Even a mediocre driver can do things like change lanes in heavy traffic. The nut of the problem is a social one. You have to signal your intention and then figure out when someone else is going to let you in. The basic human ability to form a picture that includes the intentions of the other human beings around them is part of the "Strong AI problem" and may never be solved, ever.


From Tesla's latest Vehicle Safety Report:

> In the 4th quarter, we registered one accident for every 3.07 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.10 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.64 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles.

It's pretty unsurprising that at least augmenting human attention and input with machine attention and input reduces accidents. I agree that the cross-over point in time for full automation being safer than human drivers is a total unknown though.
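Taken at face value, the quoted figures work out to these ratios (a quick check only, ignoring the population differences raised elsewhere in the thread):

```python
# Miles per accident from the quoted Tesla report and NHTSA figure.
autopilot = 3_070_000
active_safety = 2_100_000
no_assist = 1_640_000
nhtsa_all = 479_000

for label, miles in [("Autopilot", autopilot),
                     ("active safety only", active_safety),
                     ("no assist", no_assist)]:
    print(f"{label}: {miles / nhtsa_all:.1f}x the NHTSA baseline")
# Autopilot: 6.4x, active safety only: 4.4x, no assist: 3.4x
```

Note that even Teslas with no assistance engaged show 3.4x the baseline, which hints at how much of the gap is the car and driver population rather than the software.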


Autopilot tends to be engaged on freeways/motorways, where accident rates are much lower than driving through town. If you do the stats properly, Tesla's Autopilot is probably more crash-prone than just having a human drive.

There were some stats showing that if you compare Model S deaths to other luxury cars in the same price range, they are about 3x higher for the S. Death rates in luxury cars are much lower than in the average vehicle.

I think self driving tech will cut roads deaths eventually but it needs more work.


The vast majority of the data I'm able to find on road types covers only fatalities, not accidents... but New York has accident rates per road type (and junctions!) for several years[1], and their rates are very roughly 1 accident per 300k-500k miles for rural roads, and 1 per 500k-1m miles for "[fully] controlled access" roads (which I read as highways, with divided highways being by far the lowest rate).

So even if it's just on highways, Autopilot is still out-performing humans by 2-3x, if not more.

That's on average, ignoring the better luxury-car rates mentioned elsewhere, and I have no clue if this is representative. I would be surprised if broader data were so much worse that it would reverse the relationship, though.

[1] https://www.dot.ny.gov/divisions/operating/osss/highway/acci...


It is not. Or maybe it is. We don't know:

https://www.forbes.com/sites/lanceeliot/2019/06/09/tesla-aut...


Still, there is IMHO a sampling error that makes the comparison not fully accurate.

The pool of "all vehicles" traveling on the road comprises very old cars and pickups/trucks/vans, and all kinds of drivers, including teens (or otherwise inexperienced drivers) and older people who may have slower response times or some other condition, like (say) poorer vision.

Besides the fact that Teslas are at most a few years old and on the pricey end (which should imply that they are properly maintained), they are "sports cars" (in the sense that they have very good handling and braking) and they are driven (I believe) by a certain subset of drivers, relatively young but with few or no inexperienced drivers among them.


To play devil's advocate slightly, you could pretty easily argue that self-driving tech is most useful to apply to the lower-safety brackets, which pull down that average.

So even if it's not as good as the healthy-and-wealthy bracket that might be safer, if it's better than the average then it'd be potentially significantly better than the non-healthy-and-wealthy. In that light this seems like a massive win.


Yep, only the lower-safety bracket cannot afford it, not today nor - presumably - in the near future.

And - to be fair - think "Sabrina" (the movie with Audrey Hepburn): really rich people traditionally had professional drivers (chauffeurs), who maybe had an even lower rate of accidents.


Those figures are not as good as you seem to suggest.

First of all, Tesla counts the miles for every Tesla involved in an accident, while the other figure you quote is for all miles driven by motor vehicles before an accident. Given that accidents tend to involve two or more vehicles, the number of miles traveled before an accident involving a Tesla without Autopilot or safety measures would be closer to 820,000 miles.

In that figure of 479,000 miles, commercial traffic is also included; commercial traffic makes up around 60% of all accidents. We cannot translate this to miles per accident comparable to Tesla, because commercial vehicles tend to drive more miles than passenger vehicles but there are far fewer of them, etc. Another big category that needs to be excluded from the general figure to make it comparable is motorcycles, that generous source of donor organs. Passenger vehicles in general are safer than the overall figure suggests and thus closer to Tesla's figure.

Second point: Teslas are hardly part of the second-hand or n-hand market yet, and it's a question whether Tesla will even track that data in those markets. In those markets you will see more young people as drivers (something to do with income), and they are responsible for the majority of traffic accidents involving passenger vehicles (something to do with the tendency to discount the future and to overestimate one's own capabilities).

Third point: the really good figure comes from Autopilot, but that only works in places and under conditions that are already far less accident-prone, like highways in normal weather.

The good news from the figures is that enabling the safety measures makes Tesla drivers better drivers: from 1.64 million to 2.10 million miles, roughly a 28% increase in miles traveled before an accident. That would point toward mandating level-2 automation in all new cars for more safety, rather than pushing for level-5 for some brands.


> First of all, Tesla counts the miles for every Tesla involved in an accident, while the other figure you quote is for all miles driven by motor vehicles before an accident. Given that accidents tend to involve two or more vehicles, the number of miles traveled before an accident involving a Tesla without Autopilot or safety measures would be closer to 820,000 miles.

You're doing that the wrong way, aren't you?

If 479k is total miles across an average of two cars, then the single-car equivalent, the number you'd compare to the Tesla numbers, is 240k.

It doesn't really make sense to adjust the Tesla numbers to do the comparison, but if you did you'd be doubling them, not halving them.

> We cannot translate this to miles per accident comparable to Tesla, because commercial traffic tends to drive more miles than passenger vehicles do, but there are far less of them, etc.

Neither of those reasons makes them incomparable.


Just the opposite of GP's opinion: I think basing the statistics solely on Tesla halves the figure rather than doubling it. Suppose the average interval between accidents is t and the average speed of a car is v. As Tesla cars represent only a small portion of all vehicles, we have

    - Only Tesla: distance_travelled / num_accidents = v t / 1
    - Two cars involved, but counted as one accident: distance_travelled / num_accidents = 2 v t / 1
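A toy version of this counting difference (my own illustration with made-up numbers, not from either poster):

```python
# Toy model: a fleet drives a fixed number of miles; every accident
# involves exactly two cars. Compare "miles per accident" under the
# two counting conventions discussed above.
n_cars = 1_000_000
miles_each = 10_000
accidents = 25_000  # each involving two cars (made-up rate)

total_miles = n_cars * miles_each

# Fleet-wide convention: each crash counts once.
per_accident = total_miles / accidents

# Per-vehicle convention (each involved car logs the crash, as a
# single-make dataset like Tesla's would): twice the involvements,
# so half the miles-per-accident figure.
per_involvement = total_miles / (accidents * 2)

print(per_accident, per_involvement)  # 400000.0 200000.0
```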


That's a nice bit of advertising, but they cheat: when the car detects an unusual or difficult situation developing, it prompts the driver to take over. This means that all the really tricky bits of driving are done by the human and all the easy bits by the machine.

There was a competing study against cars of similar price/age/safety equipment that didn't have the Autopilot option, and Teslas caused considerably more accidents.


It also lets them blame the driver for their failures. Tesla is more than happy to share the car's data logs to “prove” it wasn't Tesla's fault... remember, though, according to Tesla marketing the driver is only there because the lawyers say so...


Am I correct in thinking that most people use Autopilot on the most boring parts of their drives, or on their most boring drives?

It seems to me that knowing the autopilot isn't 100% perfect is a good reason to not rely on it in more dangerous or complicated scenarios (construction, roads with poor markings, school areas).

I haven't followed too closely here though, just curious if that is a reasonable hypothesis.


This is garbage analysis.

When you compare numbers, you compare like to like. Tesla specifically prohibits engaging Autopilot in difficult situations and encourages it during long, boring, straight drives.


Humans have been failing to drive safely for longer than we've had computers, and neither has been around for very long.

It's far too soon to claim we've hit peak performance.


There is a lot of evidence that technology tends to improve over time.


Hardware eventually fails; software eventually works.


Bacteria in a sealed container will reproduce until all resources are consumed. Then they drop to anemic levels or die out entirely. Humans live in a relatively closed system. And modern technology sits atop a very fragile foundation. All that to say past evidence does indicate technology generally gets better, but not necessarily forever.


No we don't.

We live on a planet around a midlife star. The amount of resources available to us over the next 10,000 years is staggering.


Humans haven't proven the ability to live sustainably on one planet. Sustainably utilizing resources beyond the planet is far from certain.


For the record, I don't really like this thread, but GP's point does not hinge on using up other planets sustainably...


Autopilot is evidence that automated systems are already better than humans today. The edge cases scare people, but in aggregate the stats show it is superior to people. This sort of tech is only going to keep getting better.


Look up "Ergodicity".

Autopilot doesn't need to be 2x or 10x better than a human driver -- it needs to be 100x or 1,000x better, or nobody (sane) should touch it.

The problem is not that autopilot is superior to your own driving in the vast majority of cases! The problem is the 1-in-a-million time when it makes a trivial mistake that virtually no human would, and kills you instantly. Like driving under an obvious semi-trailer, and shearing your torso off.

An understandable example: You have two games of chance, both with identical 5/6 chance of winning.

One game, you'll play immediately (and as often as possible). The other, you would never play, not even once.

How is that possible, when both have identical 5/6 chances of winning? Because one is "ergodic", meaning that the odds hold over the long term however often you play, and the other is "non-ergodic", meaning that as soon as you lose, you're finished.

One is a single dice roll.

The other is Russian Roulette.

Self-driving cars are Russian Roulette. One "mistake" on the part of the system, and you're dead. The fact that the "stats" prove that it is safer, overall, don't change the fact: you and your family are dead.
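A quick simulation of the two games (my own sketch; both have a 5/6 per-round win probability, but only one lets you keep playing after a loss):

```python
import random

random.seed(42)
WIN_P = 5 / 6  # per-round win probability in both games

def dice_win_rate(rounds):
    """Ergodic game: losses don't end play, so the long-run win rate
    converges to 5/6."""
    wins = sum(random.random() < WIN_P for _ in range(rounds))
    return wins / rounds

def survives_roulette(rounds):
    """Non-ergodic game: the first loss is absorbing."""
    return all(random.random() < WIN_P for _ in range(rounds))

print(round(dice_win_rate(100_000), 2))  # close to 0.83

# Probability of surviving 20 pulls is (5/6)**20, about 2.6%.
trials = 100_000
survivors = sum(survives_roulette(20) for _ in range(trials))
print(survivors / trials)  # somewhere around 0.026
```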


People die all the time from driving under a semi-trailer. Look up "Underride". Over 200 people die every year from underride crashes in the US, so this isn't a mistake made only by autopilots. In other words, if we switched to self-driving cars and three people died every week when the autopilot drove into a semi-trailer, that would be a big improvement.


So, your family would be OK that your car sheared you apart in a freak, unnecessary accident, so long as the overall stats were better than average?

Non-ergodic stats are not comparable to ergodic (group) stats.


Given 100 individuals who would have died in everyday accidents all would prefer a 99% chance of survival.

Given a population of a million all would prefer a 1 in a million chance of death under a semi to a 1 in 10k chance of being smeared in a more average fashion.

Your argument is the same one used against seat belts with people advocating a strictly inferior survival strategy to prevent a rare misadventure.


Let's say your autodrive system replaces 100,000 driver-hours.

Let's say those driver-hours would have resulted in 3 deaths of others (in vehicles or pedestrians) but your autodrive results in 1 different death (which wouldn't have occurred with the human drivers).

Your stats are no comfort at all to the loved ones of your victim.

It's like the Trolley Problem but the humans are unidentifiable.


The trolley problem challenges people who don't consider choosing not to act to be equally an active choice, or who intuitively feel that action is different from inaction.

I would always pick the side most valuable to me and pull the switch, or leave it be, to bring about the better end result. Given the availability of self-driving vehicles that reduce mortality, choosing to drive oneself and kill more people is an unethical choice that will be as little comfort to the three victims as to the one.


  I would always pick the most valuable to me side
Of course, when you know nothing about the would-be victims. But on a pure-numbers theoretical basis, your autodrive veers away from the three killers fleeing police by driving on the wrong side of the road, in favor of running down two kids in a crosswalk.

Real life is complicated.


Where did I say I would decide value purely by number of people?

Why did you include the fact that they were killers fleeing police? Are you supposing that I will somehow infer this in the two seconds I have to react, or are you just positing complete nonsense to muddy the waters with emotion?

Pedantically, one would virtually never choose to hit pedestrians instead of cars, because pedestrians don't have crumple zones or airbags.

The big question is whether cars should prioritize the owner's health or the greater good, and if the latter, what the greater good is.

I don't think people will turn on a product that might decide to kill them to save a school bus, so it's simplified to minimizing the chance of fatalities while protecting the owner absolutely, even if this risks others.


It's essentially unheard-of that one's own failure to use a seatbelt results in injury or death to some other party hit by the flying driver/passenger who was unbelted. That's the appropriate analogy.


That’s not how it works for humans though. “Other drivers are terrible. I am above average. Other drivers underride. I don’t”


Do you not fly in airplanes? People make tradeoffs like this ALL the time. It's why the term "freak accident" even exists. People trade off freak accidents against convenience and quality of life, let alone a _higher_ rate of non-freak accidents.


You are overcomplicating things. Every minute of drive time is Russian roulette, with on average very low chances of losing.

The only thing that matters is the overall odds; there is no inherent mathematical difference between the two scenarios.

There is no reason to prefer a more likely death in a common scenario over a less likely death in a stupider scenario.

If you believe otherwise, you ought to explicate with numbers.
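Taking up that request: with made-up numbers, two risk mixes that sum to the same per-trip fatality probability give mathematically identical survival odds, regardless of how "stupid" the failure modes are:

```python
# Made-up per-billion-trip fatality rates, split differently between
# "common" and "freak" failure modes, but with the same total.
human = {"common": 90, "freak": 10}
auto = {"common": 20, "freak": 80}

PER_TRIPS = 1_000_000_000
trips = 10_000  # roughly a lifetime of trips

def survival(rates, n):
    p_death = sum(rates.values()) / PER_TRIPS
    return (1 - p_death) ** n

# Identical totals mean identical long-run survival probability.
print(survival(human, trips) == survival(auto, trips))  # True
```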


That doesn't sound like a proper set of assumptions: that human accidents will be fender benders while self-driving ones will be chunk-salsa worthy. The cumulative fatality risk applies to both parties, and humans will die in fatal accidents that no self-driving car would cause.

The 1,000x-better demand is more a stupid human illusion-of-control thing, like fearing flying more than a road trip of equal distance.


[flagged]


Please don't let downvotes get to you like this. I know it's not always easy, but the site guidelines ask it explicitly of everyone. That's because it quickly causes comments to sink below the acceptable levels, which is what happened here.

https://news.ycombinator.com/newsguidelines.html


Thanks for all you do, dang!

However, unless HN implements K-Means clustering, a single-axis downvote mechanism will inevitably lead to echo-chamber bullying, I believe.

Especially if no-one fights back, encouraging the people who vote “sensible” vs “nonsense”, and discouraging the “like” vs “don’t like” voters who ruin debate.


In my experience, everyone is a like vs. don't-like voter.


Autopilot is at most evidence that human-machine systems are better than just human drivers. Throughout its history "autopilot" had many cases where the human saved its ass and quite a few where it managed to kill the human anyway.


Good luck getting the system to even that level of proficiency. Not impossible, but people are actually very good drivers (that still kill truckloads of people).


Now imagine every car equipped to 'self-drive'. No manually driven cars anymore, anywhere. No oldtimers (classic cars), no exceptions. What would the daily death rate be? 30k? 30m? Somewhere in between? Lower?

It's easy to say they are better, when there are so few of them. (Compared to the amount of human driven cars)


Configurable! A reasonable amount of death: not too much, but enough to still allow for fast traffic. This kind of calculation is already done for car safety features. It probably is as ridiculous as it sounds, but who am I to judge.


Ascribe it to my typical German worryguts.

DELTA DASH NOVEMBER OSCAR NOVEMBER OSCAR


While "autonomous vehicles" are the biggest practical joke of the 2010s, this artist's joke of trapping an "autonomous" car in a circle of salt like a medieval demon was pretty good:

https://imgur.com/gallery/24jLd


> I doubt, that the software / hardware will have a common sense, like we humans have

Aren't sanity checks basically common sense? I would suspect they already play a major role in making autonomous driving a reality.


I'd argue those are the opposite - due to the software/hardware failing to have common sense, we've added some extremely coarse checks to shut it down when it's clearly going nuts. Less "common sense", more "don't push this button, it will kill you [presses button anyway]"


Common sense is not just a large number of sanity checks, it also includes the ability to solve some of the problems identified by failed sanity checks. For example, common sense might lead you to break the road rules in an emergency, or drive a car with a safety problem for a short distance instead of getting it towed.


Imagine something like a dangerous situation -- there is an oncoming fire or flood, and you need to get out of there, but your self-driving car decides to do a precautionary stop because one of the sensors stopped working. People are totally underestimating the intelligence required to make these types of decisions. Surrendering your mobility to a program is a much bigger hurdle than people think, and the bar is much higher than being able to successfully avoid collisions.


More than solving problems, common sense involves identifying a problem as a problem in the first place. Unknown unknowns.


I would argue that signing beacon messages with a PKI would mitigate that issue.


Who is the signing authority? China? The United States? Russia?


Probably whatever authority is responsible for the sign installation/road maintenance.


Makes sense assuming that authority doesn't leak their private keys like a sieve.


Provided they have a way to easily rotate keys via access to the physical hardware it's probably fine. If a local government loses a signing key they would have to register a new one and send a work crew to swap a micro SD card in every traffic light or whatever, but at least that's all they would have to do. In the meantime all an adversary can do with their signing key is install fake beacons, but that requires actually fabricating beacons, and putting themselves at risk by physically installing them in a place they will quickly be noticed, so the potential for damage is pretty limited. Much like how people can already steal stop signs or put up their own speed limit signs, in practice I don't think this will be more than a nuisance.
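To make the signing idea concrete, here's a minimal sketch. I've used Python's stdlib `hmac` purely as a runnable stand-in; an actual PKI design would use asymmetric signatures (e.g. Ed25519), so cars would only need the road authority's public key and couldn't forge beacons themselves. The message format and key are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical road-authority key. A real PKI scheme would use an
# asymmetric keypair instead; HMAC is a stdlib stand-in for illustration.
AUTHORITY_KEY = b"example-road-authority-secret"

def sign_beacon(payload: bytes) -> bytes:
    """What the authority does once, when provisioning the beacon."""
    return hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).digest()

def verify_beacon(payload: bytes, signature: bytes) -> bool:
    """What the car does on every received beacon message."""
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)  # constant-time compare

msg = b"beacon:id=1042;offset_ft=15;next_beacon_ft=35"
sig = sign_beacon(msg)

print(verify_beacon(msg, sig))                        # True
print(verify_beacon(msg.replace(b"15", b"50"), sig))  # False: tampered
```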


> If a local government loses a signing key they would have to...

They'd have to know they were compromised first. I think that is a big 'if', and creation of a centralized repository of backdoor keys makes for tempting targets.


Also, a stake in the ground at the edge of the road that says "I'm 15 feet to the right of the centerline" will probably pay for itself within a couple of years of striping. Put two in the ground that say "I'm 15 feet to the right of the centerline and 35 feet from the next beacon" and you never have to paint a centerline there again.
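As a rough sketch of how a car could recover the centerline from two such stakes (my own illustration, using the hypothetical 15 ft / 35 ft figures from the comment):

```python
import math

OFFSET_FT = 15.0  # each stake reports: "centerline is 15 ft to my left"

def centerline_from_stakes(stake_a, stake_b, offset=OFFSET_FT):
    """Shift each stake position `offset` ft perpendicular to the
    stake-to-stake direction (toward the road) to get two points on
    the centerline."""
    ax, ay = stake_a
    bx, by = stake_b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length  # unit normal, left of travel
    return ((ax + nx * offset, ay + ny * offset),
            (bx + nx * offset, by + ny * offset))

# Stakes 35 ft apart along +y, planted 15 ft right of the centerline.
p1, p2 = centerline_from_stakes((15.0, 0.0), (15.0, 35.0))
print(p1, p2)  # (0.0, 0.0) (0.0, 35.0): the centerline, recovered
```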


But all you've done is swapped one thing that will be poorly maintained for one thing that's already poorly maintained.


*with a much longer life. Striping needs to be redone every year to stay fresh. You can put a beacon in the ground and it should still be there and working 10 years later.


Those beacons can be DoSed. Consider how Hong Kong protestors avoid the face recognition with laser pens.


Yeah, but couldn't you also simply disconnect the beacons with a wire cutter?

I mean, every technical system can be shut down and will shut down sometimes; isn't that an inherent weakness of such a system?

If a beacon does not respond, disable automation for that stretch?


Sure, but I was thinking about temporary disabling (like a temporary, reversible DoS). If you can temporarily disable something, it is more difficult to find the culprit, and the impact is higher because of the surprise element. If something is permanently disabled, you fix it right away.

Just like bullying: bullying itself is annoying, but what I found annoying is the inconsistency of it. At some point, my bullies were friendly, at another point once more bullying. If it were permanently on-going and not sneaky, they'd be found out by e.g. teachers long ago already.


Ahh, yes that would be a problem.

On the other hand, wouldn't a DDoS be visible in the logs?

Logs should be checked centrally anyway, so a 30-second flood of messages should be visible.

But in the end, you are right that it will be hard to do this the right way.


Trolls can also drop bricks on cars from overpasses, and I suspect fucking with road signs will rapidly carry the same taboo and punishment


You can already screw with infrastructure to kill people. Sometimes we just have to hope that the combination of positives and negatives convince people not to go out and screw the world.


You might want to consider https://xkcd.com/1958/.


Right, the main question is can one murderer murder lots of people at once with new technology. Can they buy a machine gun, or reprogram every car to crash, or lobby the government to force doctors to prescribe the opiates they sell.


Even current tech can easily drive a car on a road completely without digital beacons.

We won't lose that capacity.


Try up north during the winter...


It would actually be easier to make smart infrastructure tamper-proof than it is to protect existing infrastructure.


You realize trolls can wreak havoc with traffic today if they wanted to?


There are AI efforts that try to teach it common sense.


This is definitely one of the things xkcd has talked about:

https://xkcd.com/1958/


We'd have self-driving road marker printers to keep the paint from becoming invisible ;)


The Road Roomba 9000!


Volvo has proposed some rather simple infrastructure - driving magnetized nails into the pavement as a lane hint. This is mostly because they have to deal with heavy snow. It's not the sole, or even the primary guidance system, but it helps. They also suggest that snowplows could use the magnets for guidance. In snowy areas, it's common to use posts or flags or even overhead signs (in Hokkaido) to show where the road is.


Possibly something set between lifts of asphalt would work. But a nail through the surface layer of asphalt would let water in and mess with the structural integrity of the wear surface. Also, the water it let in would freeze, expand, and likely push the "nail" upwards, leading to punctured tires. I'd be curious to read a bit more if you have a link.

The overhead lane markers in Hokkaido work well, but would be an expensive retrofit elsewhere (not the poles themselves, but the foundations)


It's a cool idea, but nails rust, meaning a maintenance issue and outages. Is there a magnetic material that doesn't corrode?


The signal doesn't have to be strong, only detectable.

The nails aren't structural, only a signal.

Multiple nails mean the signal is highly redundant, depending on density.

Regular maintenance could detect weakening signal (rusting nails) and result in new nails being inserted.

Iron oxide remains (weakly) magnetic.

Treatments (galvanised nails, coated, etc.) could slow corrosion.


How about magnetic hematite mixed into the road-building materials, especially as it's a byproduct of iron mining?


Good point, such an issue could/would become obvious during a multi year trial.


Yeah. For the cost of all this retrofitting of roads, you’d be able to build the most insanely comprehensive transit system anyone has ever heard of.

FWIW though, it probably is possible to use this sort of technology for buses and shuttles. You can’t fix markings on ALL the roads, but you can make sure the markings are good enough on main arterials to have a dedicated bus lane.


For the love of everything holy, some of us just don't want public transit, especially having seen the horrors of most public transit options even in the best-managed nations and conditions.

From one of my previous comments on a different post: https://news.ycombinator.com/item?id=19638171

- - - -

> Reducing the number of cars (and therefore traffic) on the roads will benefit everybody

You seem to have a rose-tinted view of the world we live in.

  Have you ever had to commute in less-than-ideal
  conditions? Heavy snow? Sleet? Black ice?
  Have you ever lived in places that are not
  perfectly flat, or in places hot enough to
  make bicycling unfeasible?
  Did you have sporting gear / work gear that you
  had to lug? Did you know some people have to
  fetch their own gear to work?
  Did you have to take calls during transit? Did
  you know it's common practice for employees to
  call into meetings during their commute and/or
  assist operations via conference calls?
  Have you had to shop for more than a baguette or
  a bagel at a store? You know how cumbersome that
  gets for even a family of three?
  Do you have the slightest clue how much casual
  violence and crime happen on public transit? [1]
 
Not to belabor the point but there simply are dozens of cases where bicycles or public transit just don't cut it. Not to mention the hygiene, personal safety (from other passengers for example) and personal space aspects involved in someone choosing a mode of transportation other than public transit or bicycling. Ride-sharing, autonomous vehicles and emission-free vehicles should all alleviate the issues we currently face with traffic, parking and accidents.

However, doing away with cars or vehicular traffic is just Pollyannaish madness.

[1]

Teen robbed at gunpoint at Fruitvale, BART officer says writing a report is a 'waste of resources'

https://www.sfgate.com/local/article/bart-police-refuse-repo...

- - - -

edit: formatting


> Ride-sharing, autonomous vehicles and emission-free vehicles should all alleviate the issues we currently face with traffic, parking and accidents.

No, they won't. This is just willful self-delusion on the part of transit haters.

Most of your concerns are either just plain petty (just wear headphones if you don't like hearing other people) or lacking in perspective (cars kill way more people than whatever safety concerns on a transit system you want to gussy up).


A lot of tech solutions to transportation problems are reinventing elements of mass transit. Self-driving trucks that are virtually linked like a train. Telling Uber pool rides to walk to the nearest intersection to optimize routes (like a bus stop). Central scheduling to share right of way timeslots between vehicles. Or tech fails to reinvent the wheel, e.g. assuming that the congestion and parking issues caused by too many single occupancy vehicles can be solved with software instead of dynamic road and parking pricing and high occupancy vehicles.


>A lot of tech solutions to transportation problems are reinventing elements of mass transit.

For sure. And I feel like they're going about it backwards: defaulting to the norms of private property, starting with private/personally owned property and developing ways to lease it out for communal use.

They'd probably have a more viable business model if they had started with communal property and charged rent or usage fees for maintenance instead. Jitney cabs or dollar vans have been around forever, and their big challenge was figuring out how to do dispatch and routing in a way that didn't give people intolerably long wait times.


The impetus will be wartime, just as the impetus for the interstate highway system and most of the railway system was wartime.

In peacetime, self-driving cars are a nice-to-have that can save many billions of driver-hours and a few traffic fatalities. In wartime, they're literally a matter of life or death. The side that can handle all of its logistics without exposing its precious humans to enemy fire has a huge advantage over the one whose supply lines can get picked off one by one.


This argument is tough to swallow. The real push to build interstate highways was in early 1950s and began in earnest in 1956. National defense was only one piece of the justification for the project.


The origins of the interstate highway system in defense is well-documented in the historical record, and the title of the bill authorizing it was the "National Interstate and Defense Highways Act":

https://en.wikipedia.org/wiki/Federal_Aid_Highway_Act_of_195...

There were of course other civilian concerns as well, but Eisenhower's arguments at the time of the plan primarily cited the importance of Germany's autobahn system during WW2 as a justification.


Dropping bombs on smart highways seems much easier than hitting a moving vehicle.


Relatively futile, though, when a self-driving cement mixer comes along, patches the crater, and restores any guidechips needed for other vehicles to navigate.


If a road is too dangerous for humans to drive on due to enemy fire, I would expect the conditions of the road to be much worse than American roads today.


Paint is a wear item; the proposed beacons would not be. If the original commenter is correct that infrastructure will meet cars halfway, I wouldn't expect the transition to self-driving cars to happen fast at all: maybe on highways first, then busy city streets, later secondary roads. That kind of thing would mean a decades-long transition, and I would estimate it never becoming fully autonomous (if beacons die, the car would need manual intervention).


If there is one thing we can be sure of, looking at the past couple decades of wireless tech, it's that there will be a profusion of mutually incompatible standards, each spanning multiple HW generations. It may not experience the same physical wear, but it will be an expensive maintenance issue nonetheless.


Definitely. If it was clear that external beacons were necessary, you would have the Tesla beacons, you'd have the European manufacturer associated beacons, you'd have the Japanese manufacturers associated beacons... installed at manufacturer's cost (or as a partnership), leading to duplication some places, total absence in others, and exclusivity in the rest. And the EU would do better than the rest of the world, having cars with different receivers (manufactured only for the EU market, of course)


"Paint is a wear item, the example of beacons would not be a wear item"

I am extremely skeptical that beacons, sitting out in the heat, and the sun, and the cold, and the rain, and exposed to whatever we use to maintain roads at that juncture, will not be "a wear item".


They'll get hit and sheared off by plows and over-size loads, or just by drunks or idiots still driving dumb cars.

I do not think people have a solid grasp of how many millions of miles of road would need to be technified for self-driving cars to be widely viable.

Most likely, it's going to be a niche thing, tightly restricted to certain places that can afford to build and maintain the infrastructure.


I, for one, already fail to grasp where we manage to get the funds required to clear and level the land and butter a fresh layer of asphalt on top of it regularly enough to build and maintain that many roads to begin with.


Don't know about the US, but most roads in Europe already have coded reflectors to aid identification of lane geometry. Roadside reflectors have become a standard feature even on small rural roads, they're just hollow plastic half-tubes with reflecting strips on them. Cheap to install, cheap to maintain.

https://duckduckgo.com/?q=road+reflectors&t=ffab&ia=images&i...


Sure, but in the sense that road signs or traffic lights are a wear item. They wouldn't be exposed to the real causes of damage to roads (at least in my area) - transport trucks and road plows. They would need to be replaced at 5-30 year intervals, rather than 4-12 month intervals.


Traffic lights seem to have a pretty long lifespan.


The primary deployment potential for self-driving cars is long haul freight, which could be done exclusively via the interstate system.

If we're going to beacon up a road, they easily make the most sense.


> which could be done exclusively via the interstate system.

The interstate system already has working self-driving, even without any beacons. Inherently-safe roads like interstate freeways are already almost solved, and you can already buy a Chevrolet today that can self-drive the interstate system 99% of the time.

It's all the dangerous roads that need the help. The ones with bicyclists and pedestrians and parked cars and uncontrolled access and such.


This might be true for the US, but in many parts of Europe roads are far more crowded. Not to mention this doesn't solve the last-mile problem.


Pavement is a wear item.


Yes, the top lift of pavement is a wear item in the decade time frame. Paint is a wear item in the sub-one year time frame (where I live, with low-VOC paint and frequent winter plowing)

Curbs, signage, lights, poles, drainage infrastructure, lower lifts of asphalt and granular material, ...are all non-wear items. This doesn't mean they never need to be repaired, but rather that they tend to get replaced after unintentional damage. Off-surface beacons (what the original comment suggested) would be in this category.


All it takes is a little bit of rain and hundreds of traffic lights go haywire in my city. What makes you think those beacons would fare any better? Politicians don't give a crap about long-term maintenance of low-visibility infrastructure like that, so it's poorly funded.

And if you say "private contractors will save the day", please, stop drinking the Ayn Rand Kool-Aid.


What is wrong with the traffic lights in your city would be the much more interesting question here. Everywhere I know, traffic lights work as long as their power supply is intact


Shitty maintenance on top of the lowest bidder supplying sub-par equipment, because half of the money went to corruption. The result is water getting into the circuits and shorting them out.


Alright. I work with roads and traffic infrastructure, and have never seen that situation before. The active equipment (in my experience) is all stored in above ground cabinets - just simple grey metal cabinets, elevated on foundations or poles, sealed with a gasket but with vent holes to prevent heat buildup. It would take almost deliberate misconstruction to cause a short.


This doesn't answer your question about money, but with (mostly) human drivers on the road there isn't much incentive to keep the lines visible, because humans are excellent at inferring where the lines should be, even if they're not there.


The counter-example to this is train tracks. (There are minimum standards for roads, too.)


Self driving trains would probably be easier. Aren't there already some airport shuttle trains and metro transit that's fully automated?

https://en.m.wikipedia.org/wiki/List_of_automated_train_syst...


SkyTrain in Vancouver, BC for example is fully automated. It works well unless there are issues with track or sensors, at which point it has to be driven manually.


The SkyTrain is a great example of how easily transportation technology infrastructure fails during even slightly unexpected circumstances. The trains were inoperable multiple times this past week, for hours at a time, due to snow storms.

Trains aren't exactly new technology, yet here we are with them still not functioning properly during what was actually a pretty routine snow storm. It doesn't leave me very optimistic about self-driving cars, which will be significantly more complicated technology.


I agree with your point, although I don't think "slightly unexpected" is fair in this case. For non-locals, Vancouver and the region was hit with pretty heavy snowfall, ice and wind that week.

Sources

Footage of conditions in the city: https://www.youtube.com/watch?v=Ek8dN9AoGbo

Hydro woes and 150km/h winds in the region: https://vancouversun.com/news/local-news/snow-day-highway-al...

Transit status snapshot: https://www.vancourier.com/news/metro-vancouver-snow-results...

School closures, Blizzard warning: https://vancouversun.com/news/local-news/snow-day-highway-al...

Interviews about the ice buildup: https://www.youtube.com/watch?v=FWgx99k585s

Finally, a bit of fun; how Canadians deal with weather: https://bc.ctvnews.ca/video-shows-people-skiing-snowboarding...


Docklands Light Railway works quite well with 45 stations. Humans can take over if needed. Running since 1987 so fairly old tech.


A DLR train has a trained operator, they just aren't (usually) driving. A few places have trains with nobody trained onboard at all. In the event of an emergency remote operators can talk to the passengers e.g. to explain that somebody will come to help or that it has been made safe for them to evacuate (automated trains are invariably electric so it may not be safe to just wander off the train outside a station).


FWIW, self driving trains are a very different problem. With self driving cars, the approach has been to use sensors on the vehicle and make autonomous decisions based on that input. This will never work for trains, because their stopping distances are so long that they cannot use a sensor to detect most oncoming issues.

Self driving trains will require more coordination, with central dispatch to tell them when/where to go. That leaves the intelligence on the train much more basic.


So how do drivers stop with long stopping distance?

They have signals to tell them “stop”, they have speed limits posted, they have route knowledge. All detectable by computers, even without changes. Fast trains have in-cab signalling in any case.

They have no decision about where to go (signal box sets the points/switch track), they have no decision on when to go (signal goes green)

The only bit which may need human input is the “ok to depart” notification when all the doors are clear at a station.


Except you can always install sensors that can look as far as you want


Do you mean on the tracks?

There are hard limitations on sensing onboard the train. Tracks that bend around hills, etc. I have some exposure to wireless communication for trains, and one of the limitations is that you often cannot get a line of sight even from one end of the train to the other.


Not OP, but yes, along the right of way.

Since railroad tracks rarely get up and go walkabout, you can instrument them (conduction, video, lidar/radar) to the limits of your preferences and/or budget. Information can be relayed to both central control and individual trainsets.


Is self-driving the same as fully-automated? I think not.

Self-driving implies intelligence, and fully-automated trains simply follow rote rules, and apply the emergency brakes if something unexpected happens. Not intelligent.

Put another way, self-driving has unbounded complexity, while a fixed number of vehicles on a protected, grade-separated railway is not very complex at all.


Now that you mention it, I wonder if tracks would actually be a better solution for most use cases. I could see a situation where highways have tracks and you only have to actually drive at the beginning and end of the journey.


Never mind that money for basic road maintenance is diverted because people want to pay less tax/rates and vote for politicians who promise this, bugger the long-term effects.


To give access to a very lucrative business endeavor? Someone would come up with the investment. We put up a nice canopy of cell towers too, and wired up every home for internet, after doing it for cable TV, after doing it for electricity. No one really frowns much at those economics anymore (in big parts of the world, at least).


Those who think FSD can be solved by building special road infrastructure for self driving cars misunderstand the problem.

Cars are already equipped with suites of sensors giving them far more complete information than any human can process. Cars can already react faster and hold lanes with more precision than humans can.

What’s lacking is general intelligence. The ability to creatively respond to unexpected situations, even when it’s something you’ve never seen before.


And those unexpected situations are not edge cases either. Virtually every time you drive you’ll encounter a novel case that has never been seen before. Your human brain is good at improvising. Computers? Not so much...


My average drive distance is under 10 miles. The expected number of miles for Waymo to disengage once is over 3 orders of magnitude higher. I'd have to take 1000 trips before Waymo would have disengaged.

That does not jive with your numbers (1 disengage per trip, roughly 10 miles), unless you're saying that Waymo's numbers are juiced by few city miles?
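A rough back-of-envelope of that ratio (the ~13,000 miles-per-disengagement figure is an assumption on my part, roughly Waymo's 2019 California DMV number; substitute whatever figure you trust):

```python
# Back-of-envelope: trips per disengagement, assuming
# ~13,000 miles per disengagement (approximately Waymo's
# 2019 CA DMV report; the exact number varies by year).
MILES_PER_DISENGAGEMENT = 13_000
AVG_TRIP_MILES = 10  # the parent comment's average drive

trips_per_disengagement = MILES_PER_DISENGAGEMENT / AVG_TRIP_MILES
print(trips_per_disengagement)  # 1300.0 -- on the order of 1000 trips
```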


Last I checked, Waymo drives a very prescriptive route and doesn't go over something like 35mph. It's also not running at scale, meaning there isn't a non-trivial number of them on the road.

Call me back when waymo can drive in all seasons in random places anywhere in the US.


Sure sure, maybe we have FSD only in like SoCal+TX+AZ+AL+FL. That, on its own, is a game changer.

Doesn't have to solve all problems, only sufficiently large problems.


Gentle correction: 'jive' -> 'jibe'


Oooh, thanks for that!


It's also a business endeavour that's done a lot of bragging about the number of people it will render jobless. It's not going to be an easy political sell to erect lots of additional road furniture unless it's cheap and unobtrusive.


All of these provided access to new technologies.

Autonomous driving is a marginal improvement on an already deployed technology.


It’s hardly a marginal improvement. If FSD were to become a reality, it would offer incredible benefits to economic productivity, safety, and quality of life.


Privatized roads?


Does it have to be visible? Couldn't it be tokens transmitting their location with great accuracy?

0 visibility in snow happens often.


The Advanced Snow Plow system that Caltrans used as a test on Interstate 80 near Donner Summit had four miles with embedded magnets. This cost about $25,000/mile in '98.

The full description of the system can be read at https://path.berkeley.edu/sites/default/files/advanced_snowp...

Alaska uses GPS for their precision plowing - https://www.truckinginfo.com/329914/how-alaska-dot-uses-gps-... . Note that to get the 2" precision you need high quality GPS receivers.

> The trucks have two GPS receivers mounted atop the cab. These receivers cost about $10,000 each, Shankwitz says. "That's probably why this hasn't been deployed in many other areas; it's just too expensive and most applications do not require that level of accuracy."

> The two-centimeter accuracy actually comes from a third receiver -- a high-precision, stationary ground-based receiver perched atop a microwave communications tower in nearby Valdez. It's accurate to within millimeters and it acts as reference receiver for the plow-mounted systems.
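The scheme those quotes describe is differential GPS: a base receiver at a precisely surveyed location measures the current GPS error and broadcasts it to the rovers. A toy sketch of the idea (the coordinates are made-up 2-D values, not real lat/lon):

```python
# Toy differential-GPS correction: the stationary base knows its
# true position, so the difference between its GPS fix and its
# true position is the current local GPS error. Rovers (the plows)
# subtract that error from their own fixes.

def dgps_correct(rover_fix, base_fix, base_true):
    """Apply the base station's measured GPS error to the rover's fix."""
    err = (base_fix[0] - base_true[0], base_fix[1] - base_true[1])
    return (rover_fix[0] - err[0], rover_fix[1] - err[1])

# Base sits at (100.0, 200.0) but its GPS reads (101.2, 198.5);
# the rover's raw fix carries (roughly) the same error.
corrected = dgps_correct((501.2, 298.5), (101.2, 198.5), (100.0, 200.0))
print(round(corrected[0], 3), round(corrected[1], 3))  # 500.0 300.0
```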

I don't see this being standard on self driving cars.

That said, ground penetrating radar is being looked at and appears to be a lower price point. https://phys.org/news/2016-06-vehicles-high-precision-advers...


It's called differential GPS. However, the new block of GPS satellites has higher accuracy without the need for the (multiple) expensive receivers.

However, I'd argue the lanekeeping and "where am I" problems this stuff solves is dwarfed by the common sense and logical reasoning & recognition problems.


I don't see why not. We have centimeter-level GPS. It's true you maybe can't always rely on GPS signal being there, but you could also install devices on traffic lights that would allow cars to precisely locate themselves at intersections, so they stop at the correct position, etc.

What I think is that self-driving cars may also force us to confront ways in which real-world driving environments are inadequate, so that we can make them more adequate. For example, there are intersections where stop signs (or other signs) are present but not visible. Humans know they have to stop there, so they stop anyway. Self-driving cars could systematically find and report these locations, and might get the city to do something about them.


I also picture that the moment lots of cube satellites are overhead broadcasting internet across the globe 24 hours a day, combined with millimeter-accurate GPS and 5G cellular technology, we could do a reasonable job of driving cars anywhere.


Actually, if all we factor in is paint: if we had autonomous vehicles for which 'paint' was critical, there would be a higher-frequency maintenance cycle using automated robot painters. I'm fairly sure that if the highways facilitated 100% autonomous operation, their maintenance could be automated, thus driving down the costs.

But you are right, road maintenance is never cheap.

It would probably be easier to have a swarm-like national air-traffic AI, all cars flying, all cargo in blimps, than to maintain a complex ground network of "smart roads".


That's a tragedy of priorities. Priorities of the local municipality or even the federal government. A dysfunction of misplaced deliberations.

All the road and infrastructure taxes are funneled by the politicians to other areas as they see fit, to placate their constituencies or finance their pet projects.

Same with out-of-control, grossly over-budget projects that don't deliver the bang for the buck.

If we - as an electorate - insisted on superior paint or "marking technology" for surface roads, we would have them in one form or another.


Do self-driving cars even need road markings? IIRC Google's self-driving car project started out ignoring road markings entirely, and relied on a centimetre-precision map of the test city instead.

That obviously has downsides too, but unreliable road markings would be a pretty silly blocker to ever having a self-driving car. That's a solvable short-term problem.


All fun and games until some jackass spoofs the GPS and moves a road 20 feet to the left.

https://www.wired.com/2012/07/drone-hijacking/


People in this thread are way too hung up on the "what if cars get attacked" problem. Just because you can't solve every problem, doesn't mean you can't have a usable product.

Cars today don't have any defense against people dropping bricks or pouring paint off an overpass, but somehow the system still works.


They do, actually: the humans driving the car can respond to a threat that an autonomous car would have to be programmed to avoid.


I'm not sure humans have the reaction time to avoid an object thrown from an overpass.

On the contrary, over the years I've read on many occasions how drivers and passengers have been seriously injured or killed by the morons who get a kick out of dropping objects from an overpass.

https://www.tennessean.com/story/news/local/wilson/mt-juliet...


One brick can attack one car.

One bricked GPS system can attack an entire country. Or planet.


True, but humans can see a bunch of kids loitering on top of a bridge, get suspicious and change lane or slow down as a precaution


That's the point though: the Waymo car doesn't rely on GPS, it has HD maps on board. Plus it has dead reckoning because it's touching the ground, so it knows where it's moving.


Not sure I get your point.

Having atom-resolution maps doesn't help a bit if you don't know where on them you are.


Can't spoof gps all the time everywhere. Once you have a general idea where you are (usually because that's where you were when you stopped yesterday), you can compare what you can see with hd maps, and fix your position exactly. You know you're not in a building, but on a drivable surface for a start.

My 2006 car was fine driving underground for 10 mins with no gps signal, but got confused when I drove it onto a train and it moved 30 miles without the wheels turning. Then after a few minutes got a gps signal again and fixed itself. Had an option to manually set the position and heading too.


That isn't true. There were car navigation devices that predated non-military GPS. They used inertial sensors to track the car's acceleration and turning, and used on-board maps to correct for the inevitable drift. If the car made a 90-degree turn onto a side street but the map indicated that road was 30 feet ahead, it would reset its position to be 30 feet from where it thought it was.
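A toy sketch of that dead-reckoning-plus-map-correction loop (the map, the drift, and all numbers are made up for illustration):

```python
# Toy dead reckoning with map matching: integrate heading and
# distance to estimate position, then snap the estimate to a
# known map feature (here, a mapped intersection) whenever the
# vehicle makes a distinctive manoeuvre near one.
import math

def dead_reckon(pos, heading_deg, distance):
    """Advance the position estimate by `distance` along `heading_deg`."""
    h = math.radians(heading_deg)
    return (pos[0] + distance * math.cos(h), pos[1] + distance * math.sin(h))

def snap_to_intersection(pos, intersections, tolerance=50.0):
    """If a mapped intersection is within `tolerance`, reset to it."""
    nearest = min(intersections, key=lambda p: math.dist(pos, p))
    return nearest if math.dist(pos, nearest) <= tolerance else pos

# Drift accumulates (the odometer under-reads a ~1000-unit leg),
# then a turn at a mapped corner lets us reset the estimate.
intersections = [(0.0, 0.0), (1000.0, 0.0)]
est = dead_reckon((0.0, 0.0), 0, 970)      # estimate: (970.0, 0.0)
est = snap_to_intersection(est, intersections)
print(est)  # (1000.0, 0.0) -- drift corrected at the mapped corner
```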


I think the idea is that with a good idea of where you're starting, either by GPS or some other waypoints, you can avoid the need for GPS through dead reckoning and updating based on other known features. At least, that's my guess.


The first navigation devices didn't rely on GPS. I wouldn't be surprised if Waymo reintroduced this.


If you have high res images available of the vicinity ( and constantly update them ) you can work out where you are.


It won't happen overnight everywhere. There are other transportation methods that require lots of infrastructure to maintain, e.g. trams and subways, where the need is big enough to be worth it. I guess we will see the same with roads: busy avenues will be smart, and small country roads will require manual driving.


We can on motorways, which is where I think driverless technology will actually start - motorways are simpler environments than most roads, and have convenient service stations along them.

Driverless lorries that go from one service station to another will be the first fully autonomous vehicles I think.


In dense urban and suburban areas that's likely not an issue.

Another possibility is Automaker X partnering with comms / infrastructure Company Y (e.g., Comcast). Put another way, if they can get to the point this is the dealbreaker then it's easy compared to what it took to get here.


Don't get me started on the potholes. There are some traffic lights near me which need patching every month or two in the winter. Can they not just engineer it to lift in a new concrete slab each time?


If you're going to try to swear like a 60's Brit, at least get you bloody grammar right. ;-) <strikethrough> "bloody" is an adjective, not an adverb, so </strikethrough>:

"We can't even keep the bloody yellow markings on the road visible"



Hmm, you're absolutely right! (And the dictionary I consulted is as wrong as I was.) So why does the post I was replying to not work as is?


The "even" both breaks up the flow and makes the "bloody" unnecessary - i.e. you've already said "even", you don't need to say "bloody" too.

So although there's nothing grammatically wrong with the sentence, it reads awkwardly. A native British English speaker would be more likely to use your suggestion. It's just one of those barely documented curiosities like adjective order (https://www.theguardian.com/commentisfree/2016/sep/13/senten...).


Oh, that's brilliant! I'd seen this described before, but not distilled down to that specific order. Thank you!


It does bloody work as is. People bloody say "bloody verb" all the bloody time.

Here's a bloody example: https://www.youtube.com/watch?v=OGWhjojt5dw


Probably just because you're not familiar with the usage.


I expect private companies will be the ones providing the infrastructure for self driving cars, and your car will have to subscribe to an "awareness service provider", much the same way that your phone needs to subscribe to a carrier.


Same as usual for the past few decades: private capital that funds existing and upcoming startups, who will deploy proprietary smart infrastructure and allow users to connect to it probably through some monthly subscription service.


If subsurface radar imaging tech works out, it will fix the marking/signage problem nicely. Kind of disappointing not to hear more about that since the initial articles came out a couple of years ago.


Perhaps take 10% of the military budget and you might just have some $ for that


Probably in those places in the world where "we" can keep the markings on the road visible and the roads properly maintained?


The money is going to come from ads and VCs


It seems to me 99.99% of all road markings are perfectly visible


I'm incredulous that that can be the case anywhere.

I live in a wealthy jurisdiction, and the road markings here largely are not what I'd consider to be "perfectly visible". I'd guess that currently, probably 3/4 of painted lane markings have one or more of the following issues: a) obscured by snow/ice, b) faded c) road under construction and lane markings don't correspond to current lanes.


This just goes to show how even unperceptive humans are better than the most perceptive car logic.

Where the car would freak out and stop, you don't even notice anything is missing.


If you ever visit a city that gets substantial snowfall, road markings are essentially nonexistent, even when the snow and ice melts in the spring.


Assuming you're right in that estimate, the hard work is getting more 9s. As a total kind of spitball number, you want 4 more of them.
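To see why more 9s matter, here's a quick illustration; both numbers are assumptions I'm making up for the sake of the arithmetic:

```python
# Why 99.99% isn't enough: if one trip passes many markings,
# the chance that *all* of them are readable drops fast.
p_visible = 0.9999        # assumed per-marking visibility
markings_per_trip = 1000  # assumed markings encountered per trip

p_all_ok = p_visible ** markings_per_trip
print(f"{p_all_ok:.3f}")  # 0.905 -- roughly 1 trip in 10 hits a bad marking
```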


If it is on asphalt roads you can easily embed wires and electronics underneath the surface by heating it up. And an embedded wire would be both easy for a computer to track and easy to fix or replace.


We may actually discover it's cheaper or smarter to limit the scope of what self-driving cars have to do by limiting their infrastructure. If cities become more pedestrian- (cyclist-, public-transport-) centric, then some of the issues disappear. It would be a hell of a lot easier to eliminate the complexities of navigating city traffic if most of it is gone, or if car traffic is completely separated from the rest - either via a reduced number of tunnels, suspended roadways, or simply by reducing the number of places where cars and pedestrians cross paths.

Between vastly reducing car traffic and separating it from pedestrians the problem of fully self driving is greatly simplified (probably not eliminated).


Where will existing cities find the budget and physical space for separating transportation modes?


If you do it in the correct order, first massively reducing car traffic and then separating it, the costs should be manageable over time. The point isn’t to do it all at once or overnight. This has the “side” effect of making cities a lot more human friendly even before doing anything to separate traffic while still reducing the complexity of navigating the city FSD.


Re-purposing one of the existing lanes (a bus lane can move more people than 3 SOV lanes combined). Toll roads, reinvesting parking revenue, and increased tax revenue from downtown land used for working, living, and playing instead of just storing automobiles.


Which proves the point the volkswagen guy was trying to make I guess.


To put this in perspective, imagine this hypothetical conversation happening a century ago:

"I have this new auto-mobile concept which doesn't require a horse and which can go fast; I envision up to fifty miles per hour. It will require a smooth, hardened road surface, but that will be achievable someday."

"Forget about it. We already have millions of miles of roads, which are bumpy, made of dirt, and hard to build and maintain. Who will do the extra work to smooth them? Harden them? Maintain them? What if some jerk digs a hole in one as a prank? Maybe this could happen in a limited way in cities, but this is overall a pipe dream."

My point is, it isn't a question of whether this is feasible, it's a question of incentives. If the incentives lead society in this direction, it will happen.


Of course cars did happen and worked with the crappy roads of the time. Self driving cars will probably have to do the same - work with existing crappy roads including dirt tracks and the like.


Well, old cars - and new ones - can run on the old roads. It's just as comfortable as a calèche. Changes were way more incremental.


You can compare these levels of progress, sure, but that doesn't mean the comparison is valid. Lots of people dismissed flying cars in the 60s and, heck, they were right; the problem is fundamentally different.

The difference between the horse-to-auto transition and the auto-to-self-driving-auto transition is that autonomous-car solutions are inherently fragile while cars are reasonably robust. (Flying cars are inherently fragile too - the problem isn't getting a car-like thing to fly, it's getting it to not hit things.)


Cobblestone/bumpy/muddy/dirt streets still exist today.

I could see autonomous vehicles working in a city, but not out in the country where a driveway might be an unmarked dirt path.


Yes but these things are already problems for human-operated cars. You just need to drive slow and be careful on some roads. You need training, instructions, markers, specialized vehicles, or prior experience on some patches of terrain. These solutions have analogs in the automated driving realm.


What features of an unmarked dirt path make it "driveable" that you can pick out but a camera cannot?


What are the incentives, though? There’s not actually that much money to be made here, despite dreams of self driving taxis (something that could never be possible). Maybe in the trucking industry, but the tech doesn’t exist and is nowhere close to being reliable enough consistently enough.


Many, many, many incentives... Cars that can drive at double the speed and much closer together, meaning much higher throughput. Productivity while commuting. The ability to serve content and ads to people while commuting. Virtual public transport through large automated vehicles. Much cheaper, round-the-clock delivery of goods. Lower death rates. The ability to live in the suburbs and commute 4 hours a day without wanting to shoot yourself in the head. The ability to punch in a destination across the country, then sleep in your car while it drives all night. The list goes on and on.


People could more safely play on their phones while 'driving'.


I think the key point is the amount of energy and resources necessary to make self driving cars a reality.

With the ongoing climate change and projected energy crisis, humanity may not have the physical resources to build and maintain self driving cars.

If somehow the energy problem is solved and climate change does not bring chaos, I can see true self driving car before 2100.


Yes, imagine that world. We wouldn’t have a million direct deaths every year, a million more caused by pollution. We wouldn’t waste trillions a year on roads, hospitals, signs, sprawl, and so much more. Just so someone can get a donut at 3am.


I think self-driving cars are like fusion reactors: they're perfectly possible on paper and, like, totally 15 years away from now.

I'm sure it'll be possible some day, but I'm not positive that it'll be possible in the general case before we reach the Singularity. And when we get there self-driving cars are going to be a small side-effect of this unprecedented revolution.

I can't help but draw a parallel between these threads about self-driving cars, where many people are saying that it's basically a done deal and we just need to wait a few years, and the thread I read a few hours ago about the Boeing 737 Max re-certification being delayed once more. I know it's a bit of a fallacy to treat HN as a singular entity, but when I read the threads about the 737 the consensus here is "it's a death trap and I'll never fly one of these planes ever again" - and at the same time we're totally optimistic that the industry will have perfected self-driving technology in our lifetimes? The industry has been cutting corners on planes that cost a fortune and didn't manage to make them safe to operate in unobstructed airspace because of a minor sensor dysfunction, but they're totally going to nail the incredibly complex task of operating a 1+ ton vehicle at highway speeds in a much less controlled environment?

I can totally see an ever-increasing amount of driver assistance in the future. But fully autonomous driving everywhere at all times? I'm really not so sure.


> I think self-driving cars are like fusion reactors, they're perfectly possible on paper and, like, totally 15years away from now.

There's a certain similarity, but I don't know of any paper that actually gives any assurance that self-driving is ever going to be possible. The only theory is "we will prevent extraneous factors and then calculate".


So is that different to fusion power? We know how to build automobiles, and we know how to trigger nuclear fusion, but in both cases we are struggling to build the technology to harness these powerful forces in a safe and convenient way.


I find these arguments—about how a certain thing is arguably “impossible”—to be the most tiresome arguments on HN.

Everything that is possible today... used to be “impossible”.

If something appears “impossible” to you... there’s no information there about whether it is possible.


> If something appears “impossible” to you... there’s no information there about whether it is possible.

I agree that an argument from personal incredulity is generally not a good one. But that doesn't mean we can't demonstrate that things are very unlikely to happen. E.g., there's good reason to think that perpetual motion machines are impossible.

It's also important to realize that "possible" in a colloquial sense often doesn't mean "having an non-zero chance of happening before the heat death of the universe". When people are asking whether self-driving cars are possible, they clearly are asking with implicit constraints on where, when, and how.

In that context, we can have quite a lot of information about how possible something is. E.g., Elon Musk predicted that Tesla would have one million robotaxis on the road by the end of 2020. Rodney Brooks, AI expert and iRobot founder, thinks that's impossible, and I agree. https://rodneybrooks.com/predictions-scorecard-2020-january-...


Perpetual motion machines are not incredible because many people have failed to build them.

They are incredible because they violate the laws of physics.


I'm glad you now agree that one can indeed have information about whether something is possible.


> Everything that is possible today... used to be “impossible”.

That's a tautological argument. Obviously things that happened were possible by definition, but you're not accounting for all the things that were actually impossible which never happened and will never happen.


How am I not accounting for them?


IDK,

It seems like interesting and substantial arguments happen around what might or might not exist 10-30 years from now. Travel to other stars was once considered no more impossible than travel to other planets, and living on other planets used to be considered not much harder than traveling to them. We now can actually travel to other planets, and we know now how dependent we are on earth gravity, how hard it is to traverse the distances between stars, and so forth.

Which is to say, no, you're simply wrong; we haven't really progressed from everything being impossible to some things being possible. Just as technology has progressed very unevenly, our ideas of possibility have gone from a lot of things being sort-of possible to some things being quite possible and others relatively more unlikely.


Fusion research could have been finished in the 90s if we didn't drastically cut funding to keep it "15 years away" forever:

https://i.imgur.com/3vYLQmm.png

I don't think self-driving cars have this problem.


The difference is SDC taxis are already in operation in the wild. Maybe it's only in a small part of Phoenix but its happening right now. Full SDCs in all developed high density urban areas will probably be a thing in this decade and the next decade for sure.


There are millions of miles of roads just in the US that will never have that sort of infrastructure, because the cost of keeping beacons running and markings in place would be astronomical. Sure, maybe California will have some of that around big population areas, but that's probably about it.

What happens to all these markings when it snows a couple centimeters?

Fully self driving cars won't happen in our lifetime, probably not this century.


It's not only lines on the road, though:

Imagine you are following a pickup truck and an obviously empty box floats out of the bed and lands directly in front of your car.

For a human it's trivial to know the box is empty and it's ok to hit it... does "AI" know that?

Multiply that case by 1000 and you have the conditions self-driving cars will need to handle on a daily basis.


If someone drives around with loose crap in the back of their truck that flies out, the truck driver is at fault not the self driving car. This is a danger we already have on the road.

Humans make the wrong call on this sort of thing all the time. They also make stupid passes, yield the right of way at the wrong time, drive the wrong way down one way streets, cut each other off, fall asleep at the wheel, drive drunk, drive without their glasses on, get road rage, etc etc etc.

There is this sort of one-way lens when it comes to self driving cars. People want to throw up red flags about all things they might do wrong while ignoring the millions of stupid things that humans do to kill each other with cars every single day.


I don't think it's super relevant who is at fault, I care what the consequences are.

"It's ok if I get into an accident - it will be the other guy's fault" is only the right reasoning if you're talking on the individual level about about monetary costs of an accident only. If you're talking about injury, or if you're talking about the cost to society as a whole, they are bad consequences regardless of whose fault the accident is.

I think the actual answer is that self-driving cars will end up doing a good enough (i.e. at least human-level but not perfect) job of not wildly swerving or braking to avoid harmless objects like floating plastic bags that this won't be a concern.


> I don't think it's super relevant who is at fault, I care what the consequences are.

> "It's ok if I get into an accident - it will be the other guy's fault"

Exactly. Most accidents take two people to happen, one who makes a mistake and at least one more who could have prevented the accident as well. For example, when right of way is ignored by someone in a left yield right situation, no accident happens if the one with right of way brakes in time. Or, if someone fails to merge in time and runs out of road, someone else can prevent an accident by braking a little.


Right, except no one is claiming humans are perfect drivers but I have heard self driving car evangelists say countless times that self driving cars are going to bring an end to traffic deaths.


They will bring an end to avoidable traffic deaths caused by driving. They obviously will not prevent deaths caused by things falling out of trucks.


I think I can write a script that is able to recognize the exact weight of the flying object. I will calculate the rate of falling based on different observations like air friction, rotation speed, speed decrease, etc. With my model I can extract a probability number which I use to make a decision. Easy job. OpenCV, TensorFlow, some tracker software and PHP, and of course a 16K 4000fps 3D camera will do the job. I am kidding. You are right.


> For a human its trivial to know the box is empty and its ok to hit it....does "AI" know that?

This is a great anecdote that definitely needs a source to back it up.

Primarily, there are a significant number of single vehicle accidents caused by drivers jerking the wheel instead of acting in a calm manner.

Secondly, there are many cases where a box is not safe to ignore: it could damage a fog light, a large staple in it could hit a tire, or it could get stuck somewhere, causing temporary loss of traction or visibility.

In conclusion: anything on the road should be treated as something to avoid, and especially something to avoid a high-speed collision with.

Other anecdotes to consider: the first Model S fire was caused by a trailer hitch in the road. A hammer hit a Model 3. Asphalt coming loose in slabs and hitting a driver. Mattresses, and ice sliding off the roof of the vehicle in front of you. Tl;dr: there are many accidents that do happen with human drivers.


> For a human its trivial...

I anecdotally question this based on both personal experience and stories I've heard. It seems like it would be a hard problem for both humans and AIs, however AIs have the edge in the long run due to sheer processing speed.


AI also has a lot of training data. Humans are highly adaptable but most of us have limited experience with situations like hydroplaning, sudden tire failure, etc.

Also self-driving cars could have better sensors that don't have blind spots, and the multitasking ability to monitor all of them at once.


I always felt the film minority report was onto something with regard to this: the vehicles depicted in the movie are (non-optionally) self-driving within dense urban regions and drivable by humans in the country


I think I am going to side with VW on this. I have always been skeptical of fully autonomous vehicles, and I do not believe they will _ever_ exist on the roads that currently stand. Driving safely in all conditions without aid from a human is simply too complex a task for code that can be audited and verified. If some AI model that's been trained on a billion years of driving experience shows promise, but it is some incomprehensible black box of weights, I won't be getting in that car.

Autonomous vehicles will only ever truly exist upon infrastructure literally designed to aid them, greatly simplifying how they need to interact with the environment, thus making the problem tractable with code we can prove works. I really think it will take more than putting markings on existing roads. It is going to take new roads full stop, probably with various wireless checkpoints built into them.


You may not be getting in that car, but I certainly will.

After all, every driver on the road today is an incomprehensible black box where not only do we not know the parameters, we don't even know the function they're parameterizing. Every instance functions differently, and our testing procedures have woefully low coverage.


When one of those black boxes malfunctions it gets taken off the road. When the AI malfunctions, are we going to shut down entire classes of vehicles until the problem is confirmed fixed?

Not to mention that most software fixes cause other bugs...


In an extreme example I expect that's precisely what would happen. Consider what's currently unfolding around the 737 Max. In the automotive space there's a long history of serious flaws that resulted in loss of life, ranging from faulty airbag deployment systems to flawed designs for ignition systems.

We have precedent for how we qualify and evaluate things for safety: test them across a variety of conditions, accumulate driver-miles or operator-hours and incident frequencies. Then, using that data establish a bar for what constitutes an acceptable level of risk given the utility something provides. If we wanted to ensure nobody ever died in a car accident, we would ensure there were no cars, but collectively we've made a different choice.


We've made a choice to allow people to kill each other in cars from time to time, but that's different from choosing to allow automated cars to kill anybody. Knowing human nature, I don't think the general public will accept double digit automated deaths without an outcry.

Shutting down a plane is completely different from taking an entire class of publicly owned vehicles off the road. People will be furious.

Yes, they will be furious about the deaths and the shutdown, both. Don't forget that people are made up of individuals.


> When the AI malfunctions, are we going to shut down entire classes of vehicles until the problem is confirmed fixed?

Yes, a malfunctioning AI would have to be grounded, just as, for example, the Boeing 737 MAX is now.


Or a car model with severe problems. This rarely (if ever) happens because with that many cars, severe problems tend to be noticed fairly quickly. That shouldn't change with self driving cars. With several million miles driven each day for more popular models, even rare edge cases should appear within days.


> are we going to shut down entire classes of vehicles until the problem is confirmed fixed?

Yes of course we will. What is the problem with that approach? That is the exactly logical thing to do and will be done.


At least the software is fixable. Other humans are not.


At least other humans fear death as a consequence of driving incorrectly. Computers do not.


Some humans, when seeking to end their own lives, end the lives of others: https://en.wikipedia.org/wiki/Germanwings_Flight_9525

That's an extreme example, but automotive suicides that kill other passengers, drivers, or pedestrians fall into the same category. Consider also deaths from accidents involving drunk driving or fatigue -- thousands of motorists take to the roads every day modified in one manner or another that reduces their driving aptitude.

Also, while it may be correct to say that computers don't "fear death", there's no reason that "risk to self" can't be part of the criteria for decision making by an autonomous system.


Too many people still driving around drunk sadly counters that point.


Humans are fixable in that they are held accountable. One person can be taken off the road if they are unfit to drive. (Revoke license, imprison, etc). Then the incident has become Someone's Fault, and society can move on.


I’d personally rather have less death and mayhem than be able to blame someone for it...


That's a fine opinion, but it's the minority. Maybe not in the abstract, but as soon as you have unexplainable deaths (meaning there's nobody to blame), people freak the fuck out.


how do you see insurance working? why would drivers be responsible to have insurance when the car is completely controlled by bigcorp programmers?


I have to have insurance now, even though a lot of the functions of my car are controlled by their software.

Remember when Toyota had that problem of the accelerator "getting stuck" because the software didn't disengage? Initially the owners' insurers were paying out, until it happened enough that they were able to prove it was Toyota's fault, and then Toyota had to pay them back.

I imagine in a self driving world it would work the same way. You get insurance, the car has a crash, your insurance and the manufacturer fight out whose fault it is.


Sometimes it seems that critics of something like self-driving cars want so badly for the project to fail that they themselves fail to see obvious solutions.

The insurance will work much better than it does today, because insurance at its core is about spreading risk and calculating the exact costs of that risk; it's about calculating the statistics of negative events and predicting the total cost of those events across entire fleets.

All parts of that equation are simply better calculated if all cars are automated: you can better estimate the number of accidents; you can see the details of every accident because there is black-box data, including video; you can compare cars to each other because one Tesla with the same hardware drives in exactly the same way as another (which cannot be said of human drivers); insurers don't have to account for risky human behavior such as drinking or driving tired; they can re-run the same situation in simulation on the same software; and so on.

Insurance is not going to have any problems; insurers are going to love it and make a lot of money on self-driving cars. They are a perfect fit for each other. Insurance companies don't even care whom they have to pay out to; they just care that the failure statistics are correctly represented and that manufacturers don't lie about them. That is all they care about: they compute a simple equation, because that's all insurance is.
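The pricing equation the comment above gestures at is simple enough to sketch. A toy Python example of the "pure premium" (expected annual claim cost per vehicle); all numbers are invented for illustration:

```python
# Toy version of the insurer's core equation: expected annual claim
# cost per vehicle. All numbers are invented for illustration.

def pure_premium(accidents_per_million_miles, avg_claim_cost, miles_per_year):
    """Expected annual claim cost per vehicle (the 'pure premium')."""
    return accidents_per_million_miles * miles_per_year * avg_claim_cost / 1_000_000

# A uniform self-driving fleet makes both rate and severity easier to
# estimate than for a crowd of heterogeneous human drivers.
human = pure_premium(4.0, avg_claim_cost=12_000, miles_per_year=12_000)
robot = pure_premium(1.0, avg_claim_cost=12_000, miles_per_year=12_000)
print(human, robot)  # 576.0 144.0
```

The point is that every input here becomes easier to measure when the whole fleet runs identical hardware and software.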


Insurers like standards and features that can be easily verified and improve predictability of the crowd. The issue with high tech solutions, especially mono-cultures is malicious hacking or outages of central services that result in simultaneous failure. An insurer can't handle 50% of cars crashing in the same year.


Insurance would be a nightmare for a manufacturer. Every accident will initially be pegged to the automaker (as it should be... it's their code!). The automaker will always try to weasel out and blame the passenger-owners of the car (they didn't maintain it, the paint was dirty and messed with the sensors, the tire pressure was 2 PSI lower than average).

And if you go with the “nobody will own cars, you’ll just summon one” model... well the fleet owner will just sue the manufacturer instead.


Just like Tesla blames dead drivers for using "autopilot." "They should have kept their hands on the wheel and been paying attention." No you can't have a copy of the data.


Seems like it would mostly be the manufacturers that would have to insure the cars, at least for the expensive part (liability)

For me? I'm a self-driving skeptic, but... if the manufacturer was willing to properly insure it (I mean a reasonable amount of insurance, at least a statistical life's worth), I'd ride in the thing. I think that's an honest signal.


>... if the manufacturer was willing to properly insure it,

Its not just the manufacturers, who is underwriting all that insurance?

Ford sells approximately 2.3M vehicles per year. Imagine 50% are self-driving: over a 5-year period that's 5.75M cars. If each one needs to carry a potential $1M policy, that's an incredible amount of liability on someone's balance sheet (even if you say the policy is only $100K, that's still $575B).

That's only Ford; add in all the other manufacturers, extend that 10-15 years into the future, and it's an incredible amount.
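For what it's worth, the arithmetic above checks out as a sum of nominal policy limits (though an insurer's actual reserves are based on expected losses, which are far smaller than the sum of the limits):

```python
# Back-of-envelope on the figures above. Note these are nominal policy
# limits; an insurer reserves against expected losses, which are far
# smaller than the sum of the limits.
vehicles_per_year = 2_300_000    # approximate Ford sales
self_driving_share = 0.5
years = 5
policy_limit = 100_000           # the $100K case

fleet = vehicles_per_year * self_driving_share * years
total_limits = fleet * policy_limit
print(f"{fleet:,.0f} cars, ${total_limits:,.0f} in nominal limits")
```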

However, there is nothing to say that new laws won't be passed to allow manufacturers to escape liability. Most likely that is what will happen (see vaccine courts, etc.).


But all of those cars are insured (and that insurance is underwritten) today. So the liability already lives on the balance sheet of insurance companies. Maybe the specific companies change...


eh, right now most people are massively underinsured; minimum coverage in California is like $35k, and most insurance companies won't sell you a plan with more than a half million of liability (at least not without an umbrella policy) - if we stop subsidizing driving through pushing costs on to victims of accidents, the cost of driving will go up. But yeah, it should be about the cost of a good umbrella policy+auto policy is now, modulo any savings if the self-driving car gets in fewer accidents.


but note, we're paying most of that already in the form of people who are killed and under-compensated by under-insured drivers. Increasing liability insurance minimums would roll that cost, currently borne entirely by the victim, into the cost of operating a car, which is where it ought to be.


> but it is some incomprehensible black box of weights, I won't be getting in that car.

You don't understand the complex weighted probabilities in your doctor's head either, but you trust them to diagnose cancer (which incidentally machine learning is beating humans at). None of the algorithms in doctors' heads can be formally proved to work in all circumstances, nor can the code that runs medical equipment.

A full understanding of complex systems (machine or human controlled) is not possible today in many domains, that's why we measure results. If the data shows that self-driving cars are safer, we will switch. At present, that's what it shows.

As to special roads/markers, these would make the technology less effective at dealing with the unexpected (crash ahead, moose on road, cyclist in the lane etc), and many of the leading companies don't think they are necessary. I can see cars forming networks which report danger, or adding more sensors, but don't think our roads will have to change for self driving, which will be prevalent within the decade IMO without infrastructure changes.


5g networks could greatly help bridge the technical gap. A good example of unforeseen consequences similar to smartphones and 4g transforming society.


I don't understand how a vehicle or any other real time system can rely on 5G. For example, electrical utilities have such a critical responsibility for matching supply to demand and maintaining exactly 50/60Hz that the landline phone network is not good enough, they have to maintain a private signaling network. Cellular networks are notoriously unreliable with dead zones, dropped calls, congestion, power failure, etc. Millimeter wave 5G is even worse with line of sight coverage zones measured in meters.


There is no one solution that is meant to not fail.

It is layers of redundancy. If one fails, the car continues to operate normally. If all are operating at peak, the car is near perfect. If multiple fail, it operates with somewhat degraded performance, but still markedly better than a human.

* Digital maps
* P2P networks
* Human-reported obstructions and changes (Waze)
* Machine-focused traffic markings
* LIDAR
* Cameras


"I'm going to agree with the side I already agreed with."

... ok.


What about dirt forest service roads? What about the dirt parking lot at that wedding you just went to? How will it find a parking spot? How will it get into a ferry where people direct you to the correct spot?

How will it deal with an accident up ahead where some drunk bystander is trying to direct traffic? How will it know to ignore the drunk guy? What if it isn’t a drunk guy but a sober person directing traffic? Does the car obey in that case?

None of those are edge cases because every time it drives it will encounter some novel edge case that has never happened before and it will have to perform better than a human.

Don’t even get started with liability. Once you take away the steering wheel the manufacturer is on the hook for every single mistake and every single accident. You’d be insane to be a manufacturer and sign up for that.

Sorry, but self driving cars are a complete fantasy.


> How will it deal with an accident up ahead where some drunk bystander is trying to direct traffic? How will it know to ignore the drunk guy? What if it isn’t a drunk guy but a sober person directing traffic? Does the car obey in that case?

Most people wouldn't instantly know how to handle those cases either. Many people would obey the drunk guy. Maybe that's the right thing to do.

> None of those are edge cases because every time it drives it will encounter some novel edge case that has never happened before and it will have to perform better than a human.

That's why these things could never be rule based, there are too many small exceptions. When they're not overfitting/able to memorize all your training examples, neural nets learn heuristics, just as people do. Different people learn different heuristics. Granted, they don't have much of the same context about the world that people do, it will take a long time to build enough examples for them to infer all of that. But Tesla's fleet is getting more driving experience every day than you will get in your entire life, and every time they train on one of those exceptions, the entire fleet will benefit.

> Don’t even get started with liability. Once you take away the steering wheel the manufacturer is on the hook for every single mistake and every single accident. You’d be insane to be a manufacturer and sign up for that.

If drivers no longer carry their own insurance, this is probably going to be handled by insurance at the manufacturer level, and baked into the price. The insurance will demand certain processes to prevent large-scale bugs being rolled out.

I don't see any fantasy here, just a lot of work.


> But Tesla's fleet is getting more driving experience every day than you will get in your entire life, and every time they train on one of those exceptions, the entire fleet will benefit.

Humans can reason about things that haven't happened to them before. Today's machine learning systems cannot. As you say, to react appropriately they must have been trained to do so using human-annotated data.

The argument against FSD is that you would need an infinite number of annotated examples, and an infinite number of subroutines for behaving in every identified situation, because the space of driving situations is effectively infinite.

Until machines are able to do general reasoning about things they've not experienced before then FSD is not happening. By the way, Demis Hassabis thinks that this sort of transfer learning is the key to solving AI.


> But Tesla's fleet is getting more driving experience every day than you will get in your entire life, and every time they train on one of those exceptions, the entire fleet will benefit.

That’s not how machine learning works. You can “train on exceptions” as much as you want, and you have 0 guaranteed results. It can help, it can make no change or it can cause unexpected regressions.


It certainly can cause unexpected regressions, but of course that's being monitored during training. I'm not saying this is some real time learning system, nor that the system behaves perfectly on an exception case as soon as it's added to the training set. My point is that their engineers are getting an incredible amount of data on tricky driving situations to train their models on, far more than a human will see in their entire driving career, and when their model improves to deal with those, it improves for everyone. And at some point, these systems will be much better at driving than any human alive.


It's one of the great misunderstandings (or, more likely, a very effective way of getting money) to claim that machine learning is a data problem. We're so far away from that point that we have literally no idea what's needed to make ML a data problem. The algorithms are extremely simple, and it's all more or less curve fitting.

The media (and a surprising number of tech people as well) tend to claim that machine learning is like human learning: repeat something enough times and you're done, you know how to do it. ML is nowhere close to that point.


Do you have any idea what the current state of neural networks and their practical use is? Most work only in very predictable, controlled environments; once you add one new ingredient, things fall apart. A 'complex' AI currently has fewer than 100 different things to deal with, and you need a hell of a fast machine to simulate millions of gens. In the case of a self-driving car you need approximately 1,000,000,000 different inputs. If you used Moore's law to predict the moment we could deal with that amount, you would have to wait about 500 years.


I do have some idea, I use them in my work. What are you calling inputs here? And gens?


> But Tesla's fleet is getting more driving experience every day than you will get in your entire life, and every time they train on one of those exceptions, the entire fleet will benefit.

Prove they are getting better. They run into the sides of trucks and off ramps on the freeway quite often (and don’t you dare blame the driver... it’s “full self driving” remember?)

Can you prove a machine learning algorithm does the right thing in novel cases it hasn’t encountered before? Nope.

And also, don’t reply with “well can humans”. That is a lame rebuttal. Computers will be held to an almost 100% non-failure standard before society accepts them. And that will never happen because of, well, reality.


The big reasons I want a self driving car is because I want to eliminate the boring drudgery and make the routine parts of driving safer. If I have to drive the last 1% myself, it's not a big deal. I suspect this is how most people feel.

I haven't done the math, but time spent on forest service roads, wedding parking lots, and boarding ferries is quite low overall. And the idea of driving in those instances doesn't bother me. Having the car take care of the other 99.999% of my driving life is what I do care about.

> None of those are edge cases because every time it drives it will encounter some novel edge case that has never happened before and it will have to perform better than a human.

If most of your driving involves drunks directing traffic, forest roads, and dirt-lot weddings then a self-driving car is likely not for you.


> If I have to drive the last 1% myself, it's not a big deal. I suspect this is how most people feel.

Which is all well and good until you realize a few things:

1) that “last 1%” happens every trip at any time. You will always encounter an edge case the machine cannot handle. Period.

2) as a result you have to always pay attention in order to immediately take over

3) you can’t because you (the royal you) are three sheets to the wind plastered drunk.

Sorry. If I have to pay attention for that 1%, it ain’t full self driving. And anything that encourages you not to pay attention 100% of the time is unsafe and shouldn’t be allowed in the road. And if I have to pay attention 100% of the time in order to take over, what the fuck is the point?


Not necessarily.

Take the Japanese highway system. It is well maintained and has "rest" stations where you can park your car for free.

If you could get an app that can drive a drunk salaryman (or not even drunk, just tired) from a Tokyo interchange to the closest rest area to wherever... you will win. Nobody who uses a car for highway driving will buy one without that, period. It is a killer app. Get off work at 9pm Friday, get the car to the IC, punch in the destination, wake up at a rest area 20 minutes from the ski resort, onsen, parents' house, etc.

Snow, typhoon coming, whatever: park at the closest rest area.

I am very pessimistic about cars without steering wheels. I am quite optimistic about cars that have the ability to drive well-marked roads. Here is a crazy thing: charge tolls for highways like Japan does, then maintain them.


The safe disengage problem is solvable to match human level outcomes. Unexpected event? Slow down, avoid obstacles, flashing lights, slowly go to a spot that's ok-ish for parking.

Yes, there can be a big hole in the road at any time; the AI has to watch the vehicle in front of it, and if there is no such vehicle, it should choose a speed that lets it evaluate road conditions for the given weather/visibility.

Vehicle coming into our lane? The AI has to match human-level maneuvering to evade the incoming car. It already has a much better chance, given that it won't panic, will always be fully alert, and will be as accurate and precise as it can be.

So the big categories are sudden road/environment changes (tree falls on road, hail, mudslide, earthquake damages road, animal crosses road), other vehicles, and pedestrians/cyclists/etc.

All are manageable with inferences from the environment (weather and roadside context determine visibility and how much space there is for maneuvering, how likely are unexpected crossings - eg. deer, kids) and surrounding traffic.

Are these hard? Sure, but none require human level cognitive reasoning.
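The safe-disengage behavior described above (slow down, hazards on, find an ok-ish spot) can be sketched as a simple priority-ordered decision. This is purely illustrative; the states, names, and ordering are invented, not from any real system:

```python
# Illustrative sketch of a minimal-risk fallback policy: evade first,
# then pull over if possible, otherwise crawl with hazards on.
# States and names are invented, not from any real system.

def fallback_action(obstacle_ahead, sensors_degraded, shoulder_available):
    if obstacle_ahead:
        return "brake_and_evade"        # immediate collision avoidance
    if sensors_degraded and shoulder_available:
        return "hazards_on_pull_over"   # park in a spot that's ok-ish
    if sensors_degraded:
        return "hazards_on_slow_crawl"  # creep until a safe spot appears
    return "continue"                   # normal driving

print(fallback_action(False, True, False))  # hazards_on_slow_crawl
```

The key design point is that none of these branches require human-level cognition, just a conservative ordering of degraded modes.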


> that “last 1%” happens every trip at any time.

Are you going to randomly have to park in a wedding lot? The only example you gave that might happen in the middle of a trip, the Tesla can handle just fine for long enough to pass control over to the driver.

> you can’t because you (the royal you) are three sheets to the wind plastered drunk.

It's still the driver's responsibility to drive sober. Even so, I'd far rather someone who is drunk be behind the wheel of a self driving car than otherwise.


And if you have not been doing the driving, it will take precious seconds - too long - to assess the situation and react appropriately.

Either you're driving, or you're a passenger in a car with a driver who seems reliable, but really isn't totally so. Eventually, that will become a winning bet, but when?


Most realistic comment in the entire thread.


The "edge case" I worry about is driving on ice or snow.

Driving safely on frozen surfaces is not a solved problem for human drivers, but most of us insist it is a perfectly reasonable thing to do.

If the vehicle drives as slowly as it should in those conditions, it would probably frustrate a lot of people who really depend on their false sense of invincibility.


Ice or snow should be WAY easier for a self-driving car than a human (especially one with no or little experience).

A car has access to all four wheel speed sensors independently and can apply the brakes on each of the four wheels independently. It can always turn the steering wheel in the right direction, and it wouldn't panic.

Also, it would always drive at the 'right' speed... sure, other human drivers might get annoyed, but assuming the true 'full self-driving' future happens, there shouldn't be many of them on the road in time anyway.
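The per-wheel advantage is easy to illustrate: with four independent speed sensors, a controller can flag exactly which wheel is slipping, something a human can only feel indirectly. A toy sketch (the slip threshold is invented, not a real ABS specification):

```python
# Toy illustration of the per-wheel advantage: four independent speed
# sensors let the controller flag exactly which wheel is slipping.
# The 20% slip threshold is invented, not a real ABS specification.

def slipping_wheels(wheel_speeds_mps, vehicle_speed_mps, threshold=0.2):
    """Return indices of wheels whose slip ratio exceeds the threshold."""
    flagged = []
    for i, w in enumerate(wheel_speeds_mps):
        slip = abs(vehicle_speed_mps - w) / max(vehicle_speed_mps, 0.1)
        if slip > threshold:
            flagged.append(i)
    return flagged

# Wheel 2 is locking up on ice: actuate just that wheel, not all four.
print(slipping_wheels([20.0, 19.8, 12.0, 20.1], 20.0))  # [2]
```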


> Driving safely on frozen surfaces is not a solved problem for human drivers, but most of us insist it is a perfectly reasonable thing to do.

What does that mean? That it actually is solved by humans? Or that it's not, and we just take the increased risk? If the former, then we can automated it. If the latter, then the self driving cars can also drive at increased risk, opt-in of course.


I meant the latter, but "equally safe as the status quo" might not be sufficient here.

If we built planes that were only as safe as highway driving, people would be outraged.

We aren't exactly rational about this stuff, and we expect a lot more from machines we don't directly control.

But I like your opt in solution. Maybe the UI loudly complains about the risk of current conditions and sticks to <5 MPH, unless the user enables "never tell me the odds" mode.


Most people never see a dirt road all year round. Don't let perfect be the enemy of good enough (ie SDCs in urban areas).


> Not only will the tech get as good as humans

[citation needed]

It's possible, sure. But it only seems likely to me because I grew up in an age of rapid progress in information technology. History, though, has plenty of examples of technological plateaus and regressions. To people in the 1970s, it seemed obvious that by 2000 they could vacation on the moon. But the rapid progress of the space race quickly dwindled; the problems were harder than we thought and the rewards smaller.

The notion that we can make a computer as smart as a human is one of those things that seems like it will be just around the corner. But it seemed that way 50 years ago, too. E.g. HAL from 2001. It's perfectly possible that humans aren't able to make anything smarter than themselves. Judging by most of the software I use, we're barely able to make things much dumber than ourselves.


> We will eventually update markings and beacons on the roads to make it easier for the cars, implement networks in which the cars can talk to each other, and make special lanes for self-driving cars only, among other improvements that will make it easier for the cars.

I feel like the correct way to describe this future isn't "self-driving cars" but rather "personal autonomous trains." That is, the road system described here would just be a rather clumsy railroad network.

I interpret the goal of having "self-driving cars" as referring to the ability to have passenger vehicles that can autonomously navigate (wayfind?) off-road, i.e. what the aim of the DARPA Grand Challenge would eventually evolve into.


It's my belief that self driving cars will, for a very significant time in the future, still need to make human control possible. Think about the vast amounts of rural roads -- dirt, gravel. Anyone who's ever taken a 4x4 out to a trailhead in the desert knows that sometimes those roads don't even exist except on a map. Washouts can be a weekly problem in certain seasons. But you don't start in the desert, right? You have to take highways at a minimum, and quite likely national interstates as well, to get to the desert roads.

Think about attending a festival or fair with grass parking. You follow a line of cars, pull up to a guy who's standing out in the field. He looks around and says, "Why don't you go park next to that red Toyota two rows over?" Sure, that part is not "on the road," but certainly I had to take highways to get there.

Maybe I, as an urban-dwelling American, only need functionality like this a few times per year. But there are significant chunks of this country and the world in general where this is part of daily life. Adopting fully self-driving cars without manual driving modes is going to take extreme amounts of change and adaptation, not only technologically but also culturally. I would recommend spending a few weeks in the deep country if you want to fully understand some of the difficulties in reaching level 5.

If the time scale you're talking about is on the order of 50 years, I could maybe see it. But I do think there will always be a need for personal vehicles with some level of manual control.

Beyond all that, however, this article to me seems like 90% clickbait. The statement merely was "Maybe it will never happen," and it was stated in the context of a discussion of the difficulty in reaching level 5 autonomy. But now we have articles throwing headlines up saying "VW Exec admits fully self driving cars may NEVER happen." Feels a little disingenuous.


eh, something that only works in cities would be pretty nice for more than half of us. If you need an off road vehicle twice a year, you rent one; we have that technology already.

(I mean, we're still a long ways away from level 5 in the city.. I'm just saying, something that was level 5 only on pavement and only in the city would be damn useful; and good enough for more than half of us.)


What makes you say this? Gut feeling? Or do you have any supportive evidence?

We've done self-driving as a POC for a decade in Denmark, and it hasn't really improved much, to the point where we too are concluding that it's probably never going to work in the real world.

Don’t get me wrong. Self driving already works, it just doesn’t work on roads. Roads where you’ll suddenly have a bunch of leaves flying around. Roads where the paint job is cracking, perhaps even missing. Roads where the street signs are old and faded.

In ten years of testing with some of the best in the business, we’ve had maybe two days worth of self-driving.


> We’ve done self-driving as a POC for a decade in Denmark

Who is "we"?


I agree.

New technologies are overestimated in the short term and underestimated in the long term.

Decades ago our computers were "soon" to be voice controlled, listening to our speech and doing our bidding. That was a big load of hype. However, over time and below the radar it became true as computers first answered phones, then took limited commands in cars and smartphones and now it is basically true (without all the hype).

Also, I wonder if these kinds of comments risk becoming

"I think there is a world market for about five computers."

or

"640K of memory should be enough for anybody."


"I dictated this , to the quiet office. The word area isn't too bad."

[What I actually said was 'I dictated this entire comment in a quiet office <period> The word error rate isn't too bad <period>'. Built-in recognizer on a Mac.]


darn autocement.


> We will eventually update markings and beacons on the roads to make it easier for the cars, implement networks in which the cars can talk to each other

In fact, the new generations of Volkswagen Golf, Škoda Octavia, Seat Leon, and Audi A3 (released between October 2019 and March 2020) will already be able to communicate with each other[1] and with the traffic authority[2] in real-time to prevent road accidents.

Volkswagen Golf, Škoda Octavia, and Seat Leon are #1 best-selling car models in multiple European markets, and Audi A3 is one of the most popular and affordable premium cars in Europe.

[1] https://innovationorigins.com/nxp-makes-volkswagen-golf-cars...

[2] https://youtube.com/watch?v=AM_NjfZX-F0


Agreed, "never" is a ridiculously unfathomable amount of time if you define it as, say, our species' extinction date.

I'm personally bullish on fully autonomous transport because of the industry interest shown in it and the potential demand for it. The latter I don't see going away unless something comes along that makes it redundant.


I think a lot of people sharing this sentiment assume cars will always be there, or at least will still be relevant when we finally have generic autonomous driving nailed down.

If it takes 50 years to have a fully ready environment with beacons, networking, etc., will we still be riding “cars” on “streets”? I’d guess when we reach that point we’ll also have solved “getting from A to B” in completely different ways.


Agreed, level 5 autonomy is an ever shrinking number of edge cases. Human drivers also have edge cases like being drunk or otherwise incapacitated by strokes, heart attacks, getting phone calls, old, tired, etc. We still allow them on the road despite this and the notion that those things are some of the root causes of the many deadly accidents each year. Once autonomous cars are obviously safer than that, it will become the norm. We're not that far off from that. First we'll see mass deployment of level 3 & 4 first with safety drivers and when it becomes clear that those are a liability at best, also without. From there to level 5 is a matter of semantics since we'll basically have vehicles driving themselves most of the time.


I don't even think level 5 at the individual car level is needed. Tesla can operate their fleet like Amazon does with their warehouse robots.


”We will eventually update markings and beacons on the roads to make it easier for the cars”

We have already done this: they are called railways. Some of them even have self-driving trains.

To get self driving cars on roads you need a human level AI. Trying to get away with less intelligence by restricting the environment will never get to the point where it will be safe to have automated vehicles - you can’t provide infrastructure clues to the vehicle to help it tell when a child is about to run out into the road for example (and there are many other examples too) so you would have to segregate them physically from everything else and then you might as well stick to railways


"Human level AI" is a tricky wording. Sure, we may never get a program we would trust to serve in a jury, convince us about existence of God or just have a casual conversation with. But ask Go players how is that resistance to machines going.

When I drive and there's a child walking on the sidewalk, I'm not analyzing the chances the child will jump onto the road. I'm just assuming the parents have succeeded at explaining how not to kill yourself; without that assumption I would go crazy with too many things to worry about. AI does not go crazy, or even get tired, no matter how many tasks it juggles. It might usually come up with the same result that I always apply: just drive slowly through roads with children on sidewalks. But it might do something I won't do: slow down even more if the child seems to be doing something suspect.

One of the problems with Google-designed self-driving is that it goes slow and refuses to proceed when uncertain. I imagine that's why those cute little cars without a steering wheel were axed. Even if the system has driven a bajillion miles without causing a single risky situation, it won't sell if it can't guarantee doing your commute in your usual time. But it can't take the same risk you take every day: that if a child behaves extra stupid and the law misfires, you might find yourself traumatized and without a driving license. Because when the AI loses that license, it kills the whole business. So the bar for safety must be higher, slowing you down to a fully-controllable crawl, or constraining the system to high-safety situations. Hence, you get all those highway-assist features all over the place.

Disclaimer: I work at Google, but have no insider information about Waymo. I just remember these marketing materials from the Chauffeur days and still think that was the way to go.


Or we can allow Google to have the same amount of accidents as regular drivers and not penalize them as long as they stay within certain margins. Let's say if the vehicle has 1 accident for every 10M km travelled, Google can go on. Sure, some people may die, but each accident will contribute to the safety of the system. The so-called "antifragile" property.
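That margin rule is easy to make concrete. A minimal sketch in Python, where the 1-accident-per-10M-km threshold is the number from this comment and everything else (function name, example figures) is made up for illustration:

```python
# Hypothetical regulatory margin check: an operator stays licensed as long
# as its observed accident rate does not exceed an agreed threshold.
# The 1-accident-per-10M-km threshold is the figure from the comment,
# not a real regulation.

THRESHOLD_PER_KM = 1 / 10_000_000  # 1 accident per 10 million km

def within_margin(accidents: int, km_travelled: float) -> bool:
    """True if the fleet's observed accident rate is at or below the threshold."""
    if km_travelled <= 0:
        return False  # no data gathered: don't allow operation
    return accidents / km_travelled <= THRESHOLD_PER_KM

# A fleet with 3 accidents over 50M km is within margin (0.6 per 10M km):
assert within_margin(3, 50_000_000)
# 8 accidents over 50M km exceeds it (1.6 per 10M km):
assert not within_margin(8, 50_000_000)
```

The same rate could then be computed per map region to drive the "let users pick their risk level" idea from the next paragraph.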

Then you give users of the service a choice: maps where service was very reliable, where some accidents happened or where not enough data was gathered i.e. there can be problems. Let users decide where they want to go and what level of risk they are willing to take.

Maybe we can have a device (smartphone?) that serves as marker for vehicles and helps them identify people/dogs/properties around them. If you ask me this can be done even now.


But we can't allow Google to have the same amount of accidents as regular drivers, because we are not allowing them to have the accidents in the first place. "Everyone" breaks the code, and the police don't enforce it unless some special conditions apply (e.g. the infraction happens in front of a policeman or at a well-marked speed trap). The code is generally designed to prevent accidents, e.g. by stating that if there is a child near a road, you should slow down. But how often do you see drivers going significantly below the limit because of pedestrians on the sidewalk?

Now, drivers take these risks because they subconsciously learned that the probability of accident is vanishingly small, while probability of being honked at for going anything below the limit is rather large. Thus, they subconsciously balanced the expected reward/punishment to lean to the direction of taking the risk. But AI is always aware of the risk and very well able to calculate it. Now imagine the headlines "Google's AI intentionally kills X children per year" and the regulatory reaction.


You just need to slow the cars down enough to increase safety drastically where the environment is not adapted.

Having cars move slowly but steadily does not make trips longer on average, at least in areas prone to traffic jams. Even though human drivers can find it frustrating, it's actually the opposite. If vehicles are consistently slow enough, you can even get rid of traffic lights and stop signs.


"May never happen" is such a cautious statement that to argue against it means "will certainly happen", and that's a pretty extreme position to take -- so extreme I would call it techno-utopianism. I'm still waiting for my flying car, but you are already promising me a thinking car, when there are so many obstacles, each of which could sink it:

- researching it may not be feasible

   * because it becomes a waste of engineering resources to continue the research

   * because almost self-driving cars may be good enough

- even if researching it is, it may not be economical because

   * political resistance or liability laws prevent mass rollout

   * the cost of building and maintaining required infrastructure is too high

   * not enough car purchasers may want it


I don't think the hard part of autonomous driving is recognizing the markings on the road. It's inferring what other people on the road want to do next. Including pedestrians and cyclists. Autonomous driving is a lot simpler if you only have to deal with other robots.


> I am skeptical that full self-driving cars will happen in the next few years, but he is completely wrong when it comes to the long term

I think when someone says 'will never happen' they mean 'not in the foreseeable future'.

Obviously the environment, usage and science can change such that full self driving could happen. I mean in the 1600s nobody could imagine a Boeing 747 flying loaded with hundreds of people, sure.

But the hype around self driving (by all the dreamers) has been that it's more or less 'right around the corner' not in 30 or 40 years or even 20 years.


Let's assume the environment does meet halfway.

How resilient would such a system be?

How easily can it be attacked to fiddle with the ability of the car to navigate safely?

Also, much longer term, isn't it kind of weird to think that human infrastructure can only massively scale on this planet if we turn the planet into a cyborg?

Isn't it weird and wasteful to build a planet-wide cage of tech infrastructure just so the economies of the world can survive and human civilization can operate along the trajectory we are currently pointed?

Seems dismal to me.


I don't think self driving cars will be a thing until self driving friendly roads are a thing. Just one example is roads could have RFID embedded everywhere. Maybe that specific idea is bad but that's the kind of way I think we'll have to approach this. If we start embedding RFID as we're repaving roads today, in 5 years the roads where most miles are driven will be equipped. In 20 years basically every road is done.


> We will eventually update markings and beacons on the roads to make it easier for the cars.

Or even easier, platooning. I don't understand why autonomous cars is a bigger thing than platooning. I mean, platooning solves 95% of use cases of self driving cars [1] and is orders of magnitude easier problem to solve.

[1] At least for me. I do not mind driving in the cities myself, but if I just could nap or watch a movie on the highway part, that would have some utility


If cars could assemble into close convoy trains on multi-lane highways / motorways / autoroutes / whatever, then it would probably also solve electric car range for many people - you only care about range on relatively long distance journeys, and you're likely to be doing those on major multi-lane carriageways. If you could split the air resistance between a bunch of other vehicles then you'd add considerably to the range. I similarly don't understand why we're not trying to solve that instead. As you say, it seems easy compared to full autonomous driving.
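The drag-splitting argument can be back-of-enveloped. In the sketch below, both the share of energy going to aero drag at highway speed (~60%) and the average drag reduction from drafting (~30%) are assumed numbers for illustration, not measured values:

```python
# Back-of-envelope range gain from convoy drafting, under assumed numbers:
# suppose aerodynamic drag accounts for ~60% of an EV's energy use at
# highway speed, and drafting cuts drag by ~30% on average.

def platoon_range(base_range_km: float, aero_share: float, drag_cut: float) -> float:
    """Estimated range when the aero share of consumption scales down with drag."""
    consumption_factor = (1 - aero_share) + aero_share * (1 - drag_cut)
    return base_range_km / consumption_factor

r = platoon_range(300, aero_share=0.6, drag_cut=0.3)
# 300 / (0.4 + 0.6 * 0.7) = 300 / 0.82, roughly 366 km: about +22% range
assert 365 < r < 367
```

Even with those modest assumptions the gain is material, which is the point: drafting helps most exactly on the long multi-lane trips where range matters.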


https://peloton-tech.com - I mean, that's just one company, I think there are a handful working on just the problem you're talking about.


Yes. Can you estimate how much these companies have received funding compared to self-driving tech companies/projects? My wild guess (based only on the general visibility) is that it amounts pretty much to a rounding error.


sure, but I think the truck convoy companies are run a little bit more like regular companies than startups? they have a reasonable near-term goal and sales pipeline. Like, I think if you have a couple modern trucks you can go have peloton install their system and you can use it right now.


Yes, but what we are missing is a platform[1] where I can join a random convoy going the same direction as me, and pay something to the lead car, or offer myself as a lead car in case I feel like driving and earning some money. This would be startup territory for me that deserves more VC money than autonomous technology, at least just now.

[1] I assume also some regulatory developments for this are missing in addition to platform and universal tech kit for private cars. In case such platform already exists and all regulatory hurdles have been tackled, then I miss only widespread adoption and marketing...


Actually, I think there's a huge opportunity for the incumbent car manufacturers there. Considering how much identification many consumers have with their car brands, this could work out well (maybe even better) if it were only compatible with cars from the same manufacturer.

I mean, you could do a retrofit kit for regular cars, too... but that seems harder to get exactly right (I mean, considering the cost of a fuckup) and would require a bunch of new marketing infrastructure, whereas if Ford, say, just bought peloton and said "hey, make this work across our model lines" - well, that'd be a pretty good argument for buying a ford.


This could be solved for cheaper with better public transport.


There is something to this, in the sense that Palm (and others) couldn't get their machines to do complete handwriting recognition so Palm's solution was to have the computer and the human meet in the middle with Graffiti.

https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)


Heck. With Toyota building a city of the future. They might make self driving cars entirely common once you are free to design roads for them.


Yea they could put down visual markers on the road to make it easier for CV or ML models to be trained against, or maybe even physical tracks to mechanically guide these cars so that minimal software is needed. Multiple cars can be linked together for efficiency.


Yeah I think if car manufacturers can come together on standards for signs... And even maybe signs that send RF signals to cars to give them hints of what is nearby it could go a long way. Instead of pretending like the only way is to make cars work with our human friendly system.


A car that can drive itself in a specifically prepared environment would hardly qualify as "fully self driving". You could achieve that with all the AI prowess of a mechanical connection between the steering linkage and some guide rails. And you'd still fail to get even the tiniest development budget for a car that won't sell anywhere else.


If you built a genuinely smart city completely from scratch, the most elegant way would be to completely remove the need for owning cars. But that's obviously not going to happen when it's a project designed and financed by an automaker.


It is when you sell them autonomous cars that people can use, and you maintain them with a guarantee that they won't often break down in everyday use.


> but most forget to account for the fact that the environment will meet the cars part way

Multiple important stakeholders seem to have significant incentives to make this happen. Local/state govs want fewer traffic jams and crashes, auto insurance companies would love to collect premiums and not pay out, and Uber and Lyft would love to get rid of their drivers.


I want an oceanside villa on the dark side of the moon, but that doesn't mean it will happen. Just because there is motivation to do something doesn't mean it is possible.


Of course it's possible: we have billions of existence proofs that driving high-speed vehicles successfully using sensors that measure light and sound works. The question is how much it takes to replicate that system artificially.


I don't think high speed is where the problem is. The challenges lie in the diverse environments where literally anything can happen.


but if you had the funds to, why wouldn't you?


Because there are million other better uses for that money.


sounds like you don't actually want it then.


You are so sure about this. I’m curious where you live where the streets are so well maintained, there’s very little non-car traffic, rarely road construction, no bad weather, no emergency vehicles, no overly narrow two-lane roads, no gravel roads, and electronic maps that are correct 100% of the time.


Modifying the environment sounds a lot like the the level 4 autonomy the exec said would be quite possible.


I like driving. I also like the assistance that modern vehicles provide on motorways. However, would I ever fully hand over control? Probably not. A Boeing 747 has autopilot, yet pilots still sit behind it to keep an eye on what it's doing.


But how many passengers are on the 747?

The ratio of pilots to total people on the plane is often something like 1/100. In a car it might average 1/1.2 or something. I think that should be improved.

Also, on a plane, those pilots are never expected to jump in with 2 seconds notice because something isn't working right. I don't think that is realistic....people zone out, but that is expected in "supervised self driving".


Sometimes I like driving, but I also like sleeping or hacking or drinking, and it'd be great to have the choice.


I don't think our generation will ever let a computer take full control.


>>but he is completely wrong when it comes to the long term.

Never means never, not even in 145 million years. But he probably meant not in our lifetimes and with current urban planning. When all cars are self-driving it will probably be better


You will never, ever, have only self driving cars on the roads, unless you're talking exclusively about reserved motorways. You'll always have, at the very least, pedestrians and cyclists.


No way. The tech and liability will be too expensive for many years, at least till the patents run out.

My guess is that interstate type highways will be instrumented for trucks and cars will benefit.


All of this discussion on smart infrastructure is moot, imho, because it won't be guaranteed to be there on all roads and cars will have to be designed to work even when the smart infrastructure breaks down.

So this means that self-driving cars will have to safely handle these cases and that also means that it is likely that cars will have to still be able to be driven manually.

Bottom line: self-driving cars will have to handle the absence of smart infrastructure (in which case, do we really need that infrastructure? I think we'll still need it, though, to guide and improve traffic) and/or cars will continue to be driven manually at least some of the time.


I dunno. Americans seem to want these giant 4x4s even when they leave the city twice a year; they'd be better off (certainly safer... most trucks are more dangerous for other people and for the occupants) if they had a little car for city driving and rented the off road cargo hauler when they needed it.

A city-only car would be totally useful for people who live in cities; you just rent something when you want to go to the boonies.

Heck, most BEVs are that way now; I've got like 120 miles of range on mine, which is fine almost all the time. the two or three times a year I need something with more range or with more cargo capacity or what have you, I borrow or rent.

I think that the economics of the first level5 cars might be similar to current BEVs, in that you can only go where there is infrastructure. which is where most of us go most of the time.


Sure there are niche markets where fully self-driving and self-driving only cars may make sense.

But they still need to handle fault cases: e.g. they cannot rely only on a beacon sent by traffic lights, because that beacon or the whole traffic light might be out of order. So, imho, while smart infrastructure may help self-driving cars and traffic management, it does not allow you to avoid the "hard work", which is to make sure self-driving cars will behave safely and reasonably completely on their own.


eh, my argument is that as long as they are intelligent enough to safely get to the side of the road, I think there's a very large number of people who would be okay with a car that "broke down" every 10K miles if it was otherwise great. (heck, I've been in some sort of breakdown or accident in an uber more often than that, even not counting app failures, so people would probably tolerate a lot more than that if the things were really self-driving.)

Note, you still have the safety problems to solve. You need to know how to get out of the road if the road infrastructure is on the blink. I'm just saying that "I don't know what to do so I will pull over" as long as it doesn't happen too often, is an acceptable answer.


more to the point, for "full self driving" even with infrastructure, the "don't hit unexpected pedestrians" technology needs to get way better. You probably can regulate transponders in cars. You probably can't regulate transponders on children.


> All of this discussion on smart infrastructure is moot, imho, because it won't be guaranteed to be there on all roads and cars will have to be designed to work even when the smart infrastructure breaks down.

Also, a vast majority of the world doesn't even have proper roads like many western countries do.

In many parts of the world a road is not even paved or asphalted, and let's not forget what is actually making use of that road.

Seeing animals on roads might be a rare sight in a western country, but in much of the world, the road is shared by more than just passenger cars and trucks.


You’ve described level 4 automation, not level 5. So you’re agreeing with the VW exec (for what it’s worth, I also agree with you and the VW exec)


Personally, I think Drones will meet self driving... Easier to have a system without need of "markings on the ground" when things are flying.


Wouldn't be simpler to just develop smaller trains?


Smaller trains with bimodal traction. Because steel on steel is terrible at spontaneous braking, hence the significantly bigger gaps between consecutive trains compared to the gap between consecutive cars or trucks. And because steel on steel is terrible at the long tail of events that would cause contingency diversion, be it maintenance, disasters or organizational confusion. And because it would neatly solve the last mile (well, dozen miles).

Take an electrified rail network, pave it over to resemble a tramway track and add on/off ramps wherever they might be useful. Figure out an economic mechanical design that would allow a computer to precision-drive along the rail for on the fly mode switching. Which is an extremely limited task scope where computers would excel, very much unlike the almost-AGI requirements of full self driving. Mandate strong requirements for access to that network including a small minimum range of battery-autonomous operation so that you don't have to reach the atrociously high number of availability nines a conventional rail network needs to avoid total schedule collapse.


> hence the significantly bigger gaps between consecutive trains compared to the gap between consecutive cars or trucks

A lot of this is because railway signalling operates on a brick-wall principle: it is constantly assumed the vehicle in front could come to a dead halt instantaneously, whereas most road situations assume you can match the braking performance of the vehicle in front with some margin.

The railway case is safe for the trailing vehicle in the situation there's a concrete block on the track ahead, the road one is not.
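To put numbers on the difference, here's a rough comparison; the speed, deceleration, and reaction-time figures are illustrative assumptions, not quotes from any signalling standard:

```python
# Minimum safe gap under the two assumptions, using d = v^2 / (2a).
# Brick-wall: the gap must cover your full stopping distance.
# Relative (road-style): if the vehicle ahead brakes no harder than you,
# the gap only has to cover the distance travelled during your reaction time.

def stopping_distance(v_ms: float, decel_ms2: float) -> float:
    return v_ms ** 2 / (2 * decel_ms2)

v = 44.4           # ~160 km/h, in m/s (assumed example speed)
train_decel = 1.0  # m/s^2, assumed service braking for a train
reaction = 1.5     # s, assumed reaction time

brick_wall_gap = stopping_distance(v, train_decel)  # roughly 990 m
relative_gap = v * reaction                         # roughly 67 m

# Over an order of magnitude apart at the same speed:
assert brick_wall_gap > 10 * relative_gap
```

Which is exactly why the brick-wall assumption, not raw speed, dominates train spacing.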


> A lot of this is because railway signalling operates on a brick-wall principle

That applies to old-style -though still in common use- signalling. CBTC (which every railway should use, but very few do) has similar, or even tighter, margins than road signalling.


No, the gains from CBTC come from two factors: one, it being a moving block system, rather than an absolute block (really you can think of a moving block system as an absolute block one where the length of the block approaches zero), so spacing matches the braking distance required; secondly, as with many modern in-cab systems, it is down to the individual trains to compute their braking curve, rather than the length of the blocks being dictated by the worst-case braking performance of any stock on the line.

I'm also unaware of any freight or mixed traffic application of CBTC, which makes it a stretch to say every railway should, though plenty of proven in-cab systems provide many of the same benefits (and you can decrease block-length substantially to get much of the way there).


But that's not just some weird quirk railroad engineering clings to because Musk has not disrupted them yet; it's a consequence of braking performance.

Drivers feel safe to attempt brake matching (they fail often enough) because road code assumes that you never go fast enough to make stopping distance exceed visual range. Even if that rule is routinely broken the brake-match distance stays comfortably within visual range (stoplight waves travel upstream). In rail, everything happening within visual range is basically too late to even bother and this is entirely a consequence of braking performance.


> In rail, everything happening within visual range is basically too late to even bother

Not really. EMU passenger trains, such as subway trains, do have good acceleration and braking rates--good enough that they're limited by passenger comfort, not by physical hardware. This limit is about 1 m/s², with emergency braking reaching 3 m/s² (note that the latter does imply several passengers are going to be nursing injuries--there's no seat belts after all). That's roughly comparable to typical passenger vehicles.

Freight trains have much longer braking distances, but that's a factor of 10,000 tons moving at 50mph has an insane momentum combined with relatively few axles being able to contribute to stopping force.
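A quick sanity check on those stopping distances, using d = v²/(2a); the freight deceleration (~0.15 m/s²) is an assumed figure for illustration, while the 3 m/s² EMU emergency rate is from the comment above:

```python
# Rough stopping distances for a heavy freight train vs a passenger EMU.
# The freight deceleration is an illustrative assumption (few braked axles
# relative to mass); the EMU emergency figure is the 3 m/s^2 quoted above.

def stopping_distance(v_ms: float, decel_ms2: float) -> float:
    return v_ms ** 2 / (2 * decel_ms2)

v = 22.35  # 50 mph in m/s

freight = stopping_distance(v, 0.15)  # roughly 1.7 km, on the order of a mile
emu = stopping_distance(v, 3.0)       # under 100 m in an emergency stop

assert freight > 1500
assert emu < 100
```

Same speed, a factor of ~20 in stopping distance: the momentum problem really is specific to freight, not to rail as such.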

The main reason you need large distances between trains: switches. To control where a train goes requires moving a physical piece of infrastructure at the switch. You can't move the switch until the previous train clears it, and you don't want to let the subsequent train reserve a path over it until it settles into its new position--if the switch gets stuck in the middle, the train derails instead (or worse). The "brick-wall" principle follows from this situation.


> .. and will be a niche hobby on race tracks

Just like horses. I'm sure there was a time when no one imagined cars would completely replace horses on roads.


There are still horses on roads sometimes. I passed through a town recently and saw two separate horse drawn carriages on the roads.


Isn't that exactly what the article said? Lvl 4 on good prepared roads - possible, lvl 5 go anywhere you want - maybe not.


My question about this idea is always: what about pedestrians? Sure, you can use beacons so that cars won't crash into each other, and use signs to avoid collisions with fixed obstacles. But interactions with pedestrians often require guessing at their intentions, and you can't have them wear beacons at all times. Of course, some areas are already fairly hostile to pedestrians, but we don't want self-driving cars to make that problem worse.


The cost of that infra is so huge, it might be easier to imagine low-altitude self-flying pods.


Update markings and beacons? So far just mere road repairs are poor and take ages to be done.


I am generally of the philosophy that all "disruptive" technologies end up looking basically like slightly modified versions of what they replace, mostly achieved by pushing at the regulatory environment a little bit. Uber/Lyft replaces taxis. Self driving cars begin to resemble trains.


What happens during major power outages, though, which aren't an infrequent event? Like the one that happened in NYC last year, when you had citizens directing traffic in the dark? I'm struggling to understand how self-driving cars would react to that.


Clearly all the intoxicated individuals who were passengers now become drivers... or it is edge case and edge cases don’t matter for... reasons... or something. Am I getting the FSD hype train right here?


Initially in this decade I fully subscribed to your view, on paper. Then I observed how things panned out and thought some more.

Here's a grim reality: at ~$3 million per death in a car crash (a reasonable estimate of the overall insurance cost, from a decade ago or so), with ~37k deaths/year in motor vehicle accidents in the USA, that's roughly $100 billion / year — a mere 0.5% of its $20 trillion GDP. So I'm not holding my breath for public or private action at a massive scale (consider that fracking alone was orders of magnitude more profitable for the US, and came with a strong geopolitical advantage to boot).

Do the math for your country, $3M/death over GDP, it's usually negligible compared to "the big thing" that your local politicians and corporations keep talking about.
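For the US figures above, the arithmetic is straightforward:

```python
# Insurance cost of road deaths as a share of US GDP,
# using the figures from the comment above.

cost_per_death = 3e6        # ~$3M per death (insurance-cost estimate)
deaths_per_year = 37_000    # ~US motor vehicle deaths per year
gdp = 20e12                 # ~$20T US GDP

annual_cost = cost_per_death * deaths_per_year  # ~$111 billion
share_of_gdp = annual_cost / gdp                # ~0.55%

assert 1e11 < annual_cost < 1.2e11
assert 0.005 < share_of_gdp < 0.006
```

Swap in your country's death toll and GDP and the share rarely gets much bigger, which is the whole "grim reality" point.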

Even in Western Europe, where

- regulation is people's #1 method for solving everything and anything,

- "the value of life" is emphasized every other speech and publication and actual social security systems, free medical care, free education even, etc. (a few hundred bucks away from actual UBI, for real),

- companies could actually compete (Europe has zero tech giants, but several big car manufacturers),

you don't hear a lot of political or popular or private (business) support for Level 4 infrastructure (L4: roads dedicated to self-driving cars, likely to kill ~1000x less than human-driven roads, not to mention the economic gain of time while commuting and travelling by road, which whether work or leisure is a net psychological gain).

Actually L4 is not even a "topic" in many such countries (let alone L5), it's a curiosity, a funny segment to wrap up the news. Even though L4 is totally doable NOW. What you actually hear is much fear about tech — as usual. That's about it for self-driving cars.

I have no idea why, it makes no sense to me, but even if rich cosy comfortable life-adoring 35h/week western Europe doesn't want it bad, I don't know who does/will, in the short/medium term.

The above "grim reality" is just my way of fishing for answers, really. I don't know. I'm just skeptical that self-driving cars are a thing that people or leaders (public and private) actually want. I hear much, much resistance to the idea and very little interest in the upsides from the mainstream. I see smiles and eyes rolling, and 10 years later there is still no decent infrastructure to charge EVs except Tesla's — a foreign entity, by far the biggest promoter of it all, but can they do it? Can they reach L5 or politically negotiate L4? Back to the above concerns, or absence thereof really, of the mainstream.

It's like space, basically: it would take an incredibly small share of the world's GDP to put in massively more effort and dramatically shorten the path to industrial-scale space activity. Like, if it's 30 years away at the current rate, we could make it by 2030 really easily, without pushing it far (nothing like a war effort, for instance). And the benefits are so immense it's basically stupid to argue against; the question is how to do it best. And yet it's still anecdotal in most countries' budgets, mostly just PR. Even as we speak, a "prime time" for space as a topic of (positive) interest for the mainstream. Go figure.

Self-driving cars, it seems, are blocked more by political and social resistance than by idealistic technical goals: the former is the current showstopper, whereas our current technological capacities are not.


The current technology capacity is the showstopper and always will be. Technology works for humans not the other way around. Nobody will adapt our society to conform to self driving cars. It just doesn’t make sense....


As I see it, it's a case of

- ridiculous before the fact

- dangerous as disruption becomes real

- obvious in hindsight

I.e. a "paradigm shift". These things take time, from inception to maturity for adoption, regardless of tech. Usually about a generation: that customers and voters be mostly people born with the idea as an "almost reality" (after PoC, before mass adoption), that's what it usually takes to raise the S-curve.

Cars themselves weren't accepted or desired by most people for years after their appearance; it took time to change minds.

But in some cases, it was much faster, like the web or mobile phones. I just hoped this would be a case of that.

(meta: I think it's totally OK to disagree, upvoting you for discussion as a shield against downvoters based on opinion)


Gotta love optimism...


If you have to modify the environment to make it easier for the cars, then you haven't achieved fully self-driving technology.


So unless cars can navigate without roads then they are not self driving?


Navigating off-road might actually be easier in practice for AI than navigating on roads, since off-road there are generally many fewer other cars to deal with.

As a rough rule of thumb, I'll call a car fully self-driving if, without any changes having been made to the road system to accommodate self-driving cars, I feel comfortable getting in the back seat, telling it to drive me to a certain house in a city a few hundred miles away, and falling asleep.


What you are describing is a Level 4 geofenced solution: select roads where the infrastructure is in place (be it 100% asphalt, though that is not the only possible road type out there).


> implement networks in which the cars can talk to each other

And sing kumbaya, holding hands together....

What about adversarial car AIs? Malicious actors, etc.?


It would be interesting to see those cars on these Parisian streets, with this level of dynamic hazard: https://www.youtube.com/watch?v=jojf3Ci2H3A

I reckon motorways could be handled easily enough, and basic dual carriageways and normal intersections, but once you start mixing up multiple modes in inner cities, some tough decisions need to be made.


We could have them tomorrow if we could magically ban human drivers overnight. Humans are the only thing making it so hard. And the problem is that until we have 100% artificial general intelligence, that can introspect and understand the human psyche the way another human can, there will always be an intractable tail of cases where AI will fail.

So it's like the IPv6 problem. If we could all coordinate at once, it would be easy-peasy. In reality, it's virtually impossible.

Edit: a commenter pointed out that pedestrians are also a major problem; even in a far-fetched imaginary scenario I don't know how you would remove those from the equation.

