We could also imagine a world in which more subtle attacks could reliably and fairly quickly cause injury/fatality accidents, which might appeal to terrorists.
That is a privacy nightmare without regulation requiring extraneous information to be scrubbed before upload: for example, blocking out pedestrian data (not just faces but clothing and gait) and probably also any data captured through windows.
It would be ideal to have a limited view of just the roads and signage, and have a retention plan that gradually keeps less and less historical data.
For accident review more of the data might be required, so vehicles should keep the last 24 hours of raw data.
Having a central database capable of being scraped and processed to determine where any person is at any given time is a non-starter. Care needs to be taken to scrub all extraneous data from the fleet's network.
Convenience wins out over privacy for me. It's the same for the billions who elect to use Facebook. I don't care if Tesla knows I visited the supermarket.
As a pedestrian being observed by AVs, your [valid] choice has a significant impact on my ability to choose privacy over convenience.
I don't have a link for the story. But the idea behind this is not that one shouldn't cheat on one's wife because one will get caught. To really be human and honest, one must not have the urge to cheat on one's wife (or husband) in the first place, out of love. If someone has those thoughts, and the only thing preventing them from acting is that "Tesla will know"...

People should be free to cheat, and they should also get caught, but being good only because you are constantly watched?
This argument looks like "saying you don't care about privacy because you have nothing to hide is no different from saying you don't care about free speech because you have nothing to say."
You have a choice to not use the autonomous car, to not use the apple watch, to choose a privacy-conscious internet carrier.
If there are only autonomous cars that track your position, how are you going to make that trade-off?
Right now I don't have Facebook, and no one in my family or friends contacts me. Why? Because I don't have Facebook. This is my trade-off. A real trade-off would be if I could select a different provider and still be in contact with my family and friends. Right now it is a monopoly degrading my quality of life. If I had different options, ones that I could, for example, pay for but that would not track me through my data, that would be different.

If there were a way to pick your autonomous car provider by price/privacy, that could be a trade-off.

If there is a single option that you can either use or not use, that is not a trade-off. I am not a native English speaker, but to me a trade-off means having multiple options to choose from, not all or nothing...
The problem is self-driving vehicles implemented with fleet data that vacuums up everything will spy on everyone, not just the driver.
They need to be implemented with on-board redaction.
Otherwise there is no choice for anyone: the companies or the ruling party can do facial or gait recognition on pedestrians and other drivers, and scrape license plates off all parked and driving cars from the data stored in the cloud.

With Lidar alone I imagine this is less of a problem, but even then pedestrians will still need to be redacted to prevent gait recognition.
My first comment wasn't about your regard for your own privacy; it was with regard to everyone's. One person's data is mostly worthless; everyone's is priceless.
Edit: Health data is also end-to-end encrypted (accessible by your devices only, Apple servers don't have keys, if you forget your passcode you lose the data); https://support.apple.com/en-ca/HT202303
That sounds like it's taken right out of an Orwell story.
I'm also active in trying to wrestle more control of the data being collected on me.
And I am not willing to give up my privacy for your convenience and not happy with those who'd willingly give up theirs in a way that sweeps mine up with it.
I don't want to disappoint, but the future will have no privacy whatsoever.
Continued encroachments on privacy and personal autonomy may indeed deteriorate them, even cripple our ability to protect those rights, but simply worrying about that doesn't mean I must go gentle into that good night.
If you can be photographed legally in a public place, what's going to protect your privacy in public?
Any number of parties could lawfully and unobtrusively start slurping up all sorts of data which is in a kind of plain view, and nobody would really be the wiser.
That's a terrible design.
The obvious way is that the sign/beacon itself pings a server every 5 minutes, and when that stops you know it's broken.
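A minimal sketch of that watchdog idea (the beacon IDs, the two-missed-pings allowance, and the data shape are all invented for illustration):

```python
# Hypothetical server-side watchdog: flag beacons whose periodic
# ping has gone silent for more than a couple of intervals.
PING_INTERVAL = 300        # beacons ping every 5 minutes
MISSED_PINGS_ALLOWED = 2   # tolerate transient network loss

def find_dead_beacons(last_ping, now):
    """Return IDs of beacons whose last ping is too old to trust."""
    deadline = PING_INTERVAL * (MISSED_PINGS_ALLOWED + 1)
    return sorted(b for b, t in last_ping.items() if now - t > deadline)

last_ping = {"sign-17": 1000.0, "sign-18": 2450.0, "sign-19": 2500.0}
print(find_dead_beacons(last_ping, now=2600.0))  # only sign-17 has gone quiet
```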
I'm also curious how situations will be handled where there are two speed signs: the regular posted speed and a temporary speed for construction. How does it know which to obey?
I personally don’t think it’s possible to make automated cars work well without a ton of infrastructure support.
People do destroy yield signs and other safety critical road infrastructure now. I've known more than one person with an appropriated road or safety sign as a home decoration.
Really depends on where you are. I agree I can't imagine it in the USA, but in the Netherlands every square metre is documented, attributed and zoned (yes, including the leftover grassy triangle bits between highway ramps; everything). We could totally do that if necessary.
I don't know if documenting all the traffic signs would be the right solution. If anything, I would imagine this database would include way more virtual traffic signs than actually exist: not the legally required ones, but virtual ones that it would be nice to have, with everybody in the flock adhering to them.
Point is that worldwide there is a huge variety in the quality of roads, the quality of signage, driving culture and attitude, and the general predictability of the environment. Some places will lend themselves more naturally to the first forms self-driving cars will take than other places that are more "free form".
It's a bigger expectation to suppose that the car will perceive the environment better than a person would and make correct on-the-fly decisions about traffic signs when snow obscures the sign and a little bit of ice and muck obscures its cameras ever-so-slightly, making some of the sensors go half-berserk.
Then it should do the same as a person is required to do in that scenario: slow down and exhibit caution. Otherwise you get a car that confidently drives forward because it "knows" a sign is there. If the sensors can't cope with that, then the car shouldn't be on the road at all, period.
>>Why shouldn't local authorities, state DOTs, and the national DOT be obligated to also update a database that self driving cars use?
Because the same local authorities don't even have the budget, time or competency to fix the most minor issues with our roads. Potholes go unfixed for weeks; there's no budget for cleaning, for salting, for replacing missing signs or lamp posts, or simply for reviewing whether existing signage is still appropriate after changes they make. And yet the same authorities should be tasked with real-time updates to some database of signage? I'm sorry, but I'm just trying to be realistic here: we can write legislation to require authorities to do something, but in the real world that's just not going to happen reliably enough to trust it. I know I wouldn't.
FWIW I am a near-term self-driving car skeptic and have been for a long time. I just think that these are not the kind of issues that really pose a major obstacle whereas drunk people wandering across the street are.
The only way I can see that working is if there is some kind of geographic location of various stops and the like, but at that point, you need consistent connectivity to obtain that kind of data, right? May work in larger towns and cities, but what about rural areas?
In a wired world the distance to everything is zero. If you can find an 'in', you can carry out this attack in any place in the world, from the comfort of your nerd cave or barracks.
Fortunately there was almost no traffic.
Signs get vandalized or stolen all the time here in Uruguay. Mostly vandalized.
This is why we have security. And honestly modern cars are so computerized already I think your argument applies to them as much as autonomous cars.
Yes, we have security. Yes, cars can (probably) already be hacked, perhaps even en-mass rather than in targeted ways.
But: one the big advantages of machine learning in cars is that every car can learn from the experiences of every other car. That makes them monocultures. Monocultures are fragile. You find the weakness of one, you find the weakness of all.
I want the benefits of the former without the risks of the latter. I don’t know if that’s even possible.
That's where competition comes in. Just like AMD vs. Intel or OSX vs. Windows. If you find a weakness in one, you don't necessarily find a weakness in the other.
Hence, finding a weakness in Tesla doesn't mean it will work against Waymo or Uber.
Cars are already getting hacked.
The whole point of the post is to highlight that car manufacturers have no concern for, or competence in, security; it will be like the Boeing scandal but much worse.

At worst, you're a troll. Cars are nowhere near being hacked at the scale or magnitude of the Boeing fumble. Nowhere near. Not even remotely close.
When did the whole OnStar botnet start? I think it's earlier than 2010.
as of 2017 it is no longer legal to sell a car with no backup camera, and 99.9% of cars implement that with a digitized head unit that, wait for it, connects to the internet.
thus, very few new-manufacture cars in 2020 do not connect to the internet, bluetooth, sometimes wifi, etc.
That's a sunk cost fallacy. Just because other things are hackable as well doesn't mean there is no danger.
> And honestly modern cars are so computerized already I think your argument applies to them as much as autonomous cars.
I've said that in my comment above as well, but it was in an edit before you replied so I guess you didn't refresh before submitting the comment.
It's really not a sunk cost fallacy. It's good evidence that the "think of the terrorists" angle is overblown. There are plenty of other problems with self-driving cars, but not that one.
Eventually being better than humans seems like it's obviously going to happen. Not driving drunk, not getting distracted, and not sleeping at the wheel alone are advantages enough that I would be amazed if the balance doesn't eventually fall in favor of the self-driving car. Self-driving cars don't even have to be particularly good drivers to be better than the average human, given how much the human average is dragged down by recklessness.
Tell that to the jury when your self-driving car runs into the side of a school bus full of kids (which will happen given any reasonable adoption).
Self driving cars would have to be held to an almost insanely high standard to be “winnable lawsuit” proof.
A car manufacturer can already be sued for things that go wrong in a car, but the world keeps spinning.
> In the 4th quarter, we registered one accident for every 3.07 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.10 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.64 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles.
It's pretty unsurprising that at least augmenting human attention and input with machine attention and input reduces accidents. I agree that the cross-over point in time for full automation being safer than human drivers is a total unknown though.
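Taking the quoted figures at face value, the implied ratios are easy to compute (this ignores all the selection effects discussed elsewhere in the thread):

```python
# Miles per accident from the quoted Tesla report and NHTSA figure.
autopilot = 3.07e6       # Autopilot engaged
active_safety = 2.10e6   # no Autopilot, active safety features on
no_assist = 1.64e6       # neither
nhtsa_all = 0.479e6      # US average, all vehicles

print(f"Autopilot vs. US average: {autopilot / nhtsa_all:.1f}x")      # ~6.4x
print(f"Active safety vs. none:   {active_safety / no_assist:.2f}x")  # ~1.28x
```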
There were some stats that if you compare model S deaths to other luxury cars in the same price range they are about 3x higher for the S. Death rates in luxury cars are much lower than the average vehicle.
I think self driving tech will cut roads deaths eventually but it needs more work.
So even if it's just on highways, Autopilot is still out-performing humans by 2-3x, if not more.
That's on average, ignoring the better luxury-car rates mentioned elsewhere, and I have no clue if this sample is representative. I would be surprised, though, if broader data were so much worse that it reversed the relationship.
The pool of "all vehicles" traveling on the road comprises very old cars and pickups/trucks/vans, and all kinds of drivers, including teens (and otherwise inexperienced drivers) and older people who may be more likely to have slower response times or some other condition, like (say) poorer vision.

Besides the fact that Teslas are at most a few years old and on the pricey end (which should imply that they are properly maintained), they are "sports cars" (in the sense that they have very good handling and braking), and they are driven (I believe) by a certain subset of drivers: relatively young, but with no or very few inexperienced drivers.
So even if it's not as good as the healthy-and-wealthy bracket that might be safer, if it's better than the average then it'd be potentially significantly better than the non-healthy-and-wealthy. In that light this seems like a massive win.
And, to be fair, think "Sabrina" (the movie with Audrey Hepburn): really rich people traditionally had professional drivers (chauffeurs), who maybe had an even lower rate of accidents.
First of all, Tesla counts the miles driven by every Tesla involved in an accident. The other figure you quote is for all miles driven by motor vehicles before getting involved in an accident. Given that accidents tend to involve two or more vehicles, the number of miles traveled before an accident involving a Tesla without Autopilot or safety measures would be closer to 820,000 miles.

That figure of 479,000 miles also includes commercial traffic, which makes up around 60% of all accidents. We cannot translate this to miles per accident comparable to Tesla's, because commercial traffic tends to drive more miles than passenger vehicles do, but there are far fewer such vehicles, etc. Another big category that needs to be excluded from the general figure to make it comparable is motorcycles, that generous source of donor organs. Passenger vehicles in general are safer than the overall figure suggests, and thus closer to Tesla's figure.

The second point is that Teslas are hardly part of the second-hand or n-hand market yet. It is even a question whether Tesla will track that data in those markets. In those markets you will see more young people as drivers (something to do with income). They are responsible for a majority of the traffic accidents involving passenger vehicles (something to do with tendencies to discount the future and to overestimate their own capabilities).

The third point is that the really good figure comes from Autopilot, but that only works in places and under conditions that are already far less accident-prone, like highways in normal weather.

The good news from the figures is that enabling the safety measures makes Tesla drivers better drivers: from 1.64 million to 2.10 million miles, roughly a 28% increase in miles traveled before an accident. That would point toward mandating level-2 automation in all new cars for more safety, rather than pushing for level-5 for some brands.
You're doing that the wrong way, aren't you?
If 479k is total miles across an average of two cars, then the single-car equivalent, the number you'd compare to the Tesla numbers, is 240k.
It doesn't really make sense to adjust the Tesla numbers to do the comparison, but if you did you'd be doubling them, not halving them.
> We cannot translate this to miles per accident comparable to Tesla, because commercial traffic tends to drive more miles than passenger vehicles do, but there are far less of them, etc.
Neither of those reasons makes them incomparable.
- Only the Tesla: distance_travelled / num_accidents = v·t / 1
- Two cars involved, but counted as one accident: distance_travelled / num_accidents = 2·v·t / 1
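A toy numerical illustration of the counting-convention point being argued here (the 479k figure is from the thread; the two-vehicles-per-accident factor is the assumption under dispute):

```python
# If the fleet logs M vehicle-miles per accident, and each accident
# involves k vehicles, then per-vehicle miles between *involvements*
# is M / k. The two conventions differ exactly by the factor k.
M = 479_000   # fleet vehicle-miles per accident (NHTSA-style figure)
k = 2         # vehicles per accident (disputed assumption)

miles_per_vehicle_involvement = M / k
print(miles_per_vehicle_involvement)  # 239500.0, the Tesla-comparable number
```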
There was a competing study done against cars with similar price/age/safety equipment that didn't have the Autopilot option, and Teslas caused considerably more accidents.
It seems to me that knowing the autopilot isn't 100% perfect is a good reason to not rely on it in more dangerous or complicated scenarios (construction, roads with poor markings, school areas).
I haven't followed too closely here though, just curious if that is a reasonable hypothesis.
When you are comparing numbers, you compare like to like. Tesla specifically prohibits engaging Autopilot in difficult situations and encourages it during long, boring, straight drives.
It's far too soon to claim we've hit peak performance.
We live on a planet around a midlife star. The amount of resources available to us over the next 10,000 years is staggering.
Autopilot doesn't need to be 2x or 10x better than a human driver -- it needs to be 100x or 1,000x better, or nobody (sane) should touch it.
The problem is not that Autopilot is superior to your own driving in the vast majority of cases! The problem is the one-in-a-million time when it makes a trivial mistake that virtually no human would, and kills you instantly. Like driving under an obvious semi-trailer and shearing your torso off.
An understandable example: you have two games of chance, both with an identical 5/6 chance of winning.

One game you'll play immediately (and as often as possible). The other you would never play, not even once.

How is that possible, when both have identical 5/6 chances of winning? One is "ergodic", meaning the odds hold over the long term however often you play; the other is "non-ergodic", meaning that as soon as you lose, you're finished.
One is a single dice roll.
The other is Russian Roulette.
Self-driving cars are Russian roulette. One "mistake" on the part of the system, and you're dead. The fact that the "stats" prove it is safer overall doesn't change the fact: you and your family are dead.

Non-ergodic stats are not comparable to ergodic (group) stats.
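A quick simulation makes the ergodic/non-ergodic distinction concrete (the 5/6 odds come from the example above; everything else is an invented sketch):

```python
import random

def wins_when_losses_are_survivable(p_win, rounds):
    # Ergodic game: every loss is survivable, so you just keep playing
    # and wins accumulate at roughly p_win per round.
    return sum(random.random() < p_win for _ in range(rounds))

def rounds_survived_when_one_loss_ends_it(p_win):
    # Non-ergodic game: the first loss is absorbing (Russian roulette).
    n = 0
    while random.random() < p_win:
        n += 1
    return n

random.seed(0)
p = 5 / 6
print(wins_when_losses_are_survivable(p, 600))   # close to 500 wins
avg = sum(rounds_survived_when_one_loss_ends_it(p) for _ in range(10_000)) / 10_000
print(avg)  # roughly p/(1-p) = 5 rounds on average, then you're out for good
```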
Given a population of a million, all would prefer a 1-in-a-million chance of death under a semi to a 1-in-10k chance of being smeared in a more average fashion.

Your argument is the same one used against seat belts, with people advocating a strictly inferior survival strategy to prevent a rare misadventure.
Let's say those driver-hours would have resulted in 3 deaths of others (in vehicles or pedestrians) but your autodrive results in 1 different death (which wouldn't have occurred with the human drivers).
Your stats are no comfort at all to the loved ones of your victim.
It's like the Trolley Problem but the humans are unidentifiable.
I would always pick the side most valuable to me and pull the switch, or leave it be, to bring about the better end result. Given the availability of self-driving vehicles that reduce mortality, choosing to drive oneself and kill more people is an unethical choice that will be as little comfort to the three victims as to the one.
> I would always pick the most valuable to me side
Real life is complicated.
Why did you include the fact that they were killers fleeing police? Are you supposing that I will somehow infer this in the 2 seconds I have to react, or are you just positing complete nonsense to muddy the waters with emotion?
Pedantically, one would virtually never choose to hit pedestrians instead of cars, because pedestrians don't have crumple zones or air bags.

The big question is: should cars prioritize the owner's health or the greater good, and if the latter, what is the greater good?

I don't think people will turn on a product that might decide to kill them to save a school bus, so it's simplified to minimizing the chance of fatalities while protecting the owner absolutely, even if this risks others.
The only thing that matters is the overall odds; there is no inherent mathematical difference between the two scenarios.

There is no reason to prefer a more likely death in a common scenario to a less likely death in a stupider scenario.

If you believe otherwise, you ought to explicate it with numbers.

The 1000x-better demand is more a stupid human illusion-of-control thing, like fearing flying more than a road trip of equal distance.
However, unless HN implements K-Means clustering, a single-axis downvote mechanism will inevitably lead to echo-chamber bullying, I believe.
Especially if no-one fights back, encouraging the people who vote “sensible” vs “nonsense”, and discouraging the “like” vs “don’t like” voters who ruin debate.
It's easy to say they are better when there are so few of them (compared to the number of human-driven cars).
DELTA DASH NOVEMBER OSCAR NOVEMBER OSCAR
Aren't sanity checks basically common sense? I would suspect they already play a major role in making autonomous driving a reality.
They'd have to know they were compromised first. I think that is a big 'if', and creation of a centralized repository of backdoor keys makes for tempting targets.
I mean, every technical system can be shut down and will shut down sometimes; isn't that an inherent weakness of such a system?

If that beacon does not respond, disable automation for that part?
Just like bullying: the bullying itself is annoying, but what I found most annoying was the inconsistency of it. At some points my bullies were friendly; at others they were bullying again. If it had been permanently ongoing and not sneaky, they'd have been found out by e.g. teachers long ago.
On the other hand, wouldn't a ddos be visible in the logs?
Logs should be checked centrally anyway, so a 30 second flood of messages should be visible.
But in the end, you are right that it will be hard to do this the right way.
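A sliding-window counter is enough to spot the kind of 30-second flood described above; a sketch (the thresholds are invented and would need tuning in any real deployment):

```python
from collections import deque

WINDOW_SECONDS = 30
FLOOD_THRESHOLD = 1000   # messages per window considered anomalous

def is_flooding(timestamps, now):
    """timestamps: deque of message arrival times, oldest first."""
    # Drop everything older than the window, then check the rate.
    while timestamps and timestamps[0] < now - WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > FLOOD_THRESHOLD

ts = deque(i * 0.01 for i in range(2000))   # 2000 messages in 20 seconds
print(is_flooding(ts, now=20.0))            # True: well above the threshold
```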
We won't lose that capacity.
The overhead lane markers in Hokkaido work well, but would be an expensive retrofit elsewhere (not the poles themselves, but the foundations).
The nails aren't structural, only a signal.
Multiple nails mean the signal is highly redundant, depending on density.
Regular maintenance could detect weakening signal (rusting nails) and result in new nails being inserted.
Iron oxide remains (weakly) magnetic.
Treatments (galvanised nails, coated, etc.) could slow corrosion.
FWIW though, it probably is possible to use this sort of technology for buses and shuttles. You can’t fix markings on ALL the roads, but you can make sure the markings are good enough on main arterials to have a dedicated bus lane.
From one of my previous comments on a different post:
- - - -
> Reducing the number of cars (and therefore traffic) on the roads will benefit everybody

You seem to have a rose-tinted view of the world we live in.

Have you ever had to commute in less-than-ideal conditions? Heavy snow? Sleet? Black ice?

Have you ever lived in places that are not perfectly flat, or in places so hot that bicycling is unfeasible?

Did you have sporting gear / work gear that you had to lug? Did you know some people have to fetch their own gear to work?

Did you have to take calls during transit? Did you know it's common practice for employees to call into meetings during their commute and/or help assist operations via conference calls?

Have you had to shop for more than a baguette or a bagel at a store? You know how cumbersome that gets for even a family of three?

Do you have the slightest clue how much casual violence and crime happen on public transit?
However, doing away with cars or vehicular traffic is just Pollyannaish madness.
Teen robbed at gunpoint at Fruitvale, BART officer says writing a report is a 'waste of resources'
No they won’t. This is just willful self-delusion on the part of transit haters.
Most of your concerns are either just plain petty (like, just wear headphones if you don’t like hearing other people) or lacking in perspective (cars kill way more people than whatever safety concerns on a transit system you want to gussy up).
For sure. And I feel like they're going about it backwards by defaulting to norms of private property: starting with private/personally owned property and developing ways to lease it out for communal use.
They'd probably have a more viable business model if they had started with communal property and charged rent or usage fees for maintenance instead. Jitney cabs or dollar vans have been around forever, and their big challenge was figuring out how to do dispatch and routing in a way that didn't give people intolerably long wait times.
In peacetime, self-driving cars are a nice-to-have that can save many billions of driver hours and a few traffic fatalities. In wartime, they're literally a matter of life or death. The side that can handle all of their logistics without exposing their precious humans to enemy fire has a huge advantage over the one whose supply lanes can get picked off one by one.
There were of course other civilian concerns as well, but Eisenhower's arguments at the time of the plan primarily cited the importance of Germany's autobahn system during WW2 as a justification.
I am extremely skeptical that beacons, sitting out in the heat, and the sun, and the cold, and the rain, and exposed to whatever we use to maintain roads at that juncture, will not be "a wear item".
I do not think people have a solid grasp of how many millions of miles of road would need to be technified for self-driving cars to be widely viable.
Most likely, it's going to be a niche thing, tightly restricted to certain places that can afford to build and maintain the infrastructure.
If we're going to beacon up a road, they easily make the most sense.
The interstate system already has working self-driving, even without any beacons. Inherently-safe roads like interstate freeways are already almost solved, and you can already buy a Chevrolet today that can self-drive the interstate system 99% of the time.
It's all the dangerous roads that need the help. The ones with bicyclists and pedestrians and parked cars and uncontrolled access and such.
Curbs, signage, lights, poles, drainage infrastructure, lower lifts of asphalt and granular material, ...are all non-wear items. This doesn't mean they never need to be repaired, but rather that they tend to get replaced after unintentional damage. Off-surface beacons (what the original comment suggested) would be in this category.
And you say "private contractors will save the day"; please, stop drinking the Ayn Rand Kool-Aid.
Trains aren't exactly new technology, yet here we are with them still not functioning properly during what was actually a pretty routine snowstorm. It doesn't leave me very optimistic for self-driving cars, which will be significantly more complicated technology.
Footage of conditions in the city:
Hydro woes and 150km/h winds in the region:
Transit status snapshot:
School closures, Blizzard warning:
Interviews about the ice buildup:
Finally, a bit of fun; how Canadians deal with weather:
Self driving trains will require more coordination, with central dispatch to tell them when/where to go. That leaves the intelligence on the train much more basic.
They have signals to tell them “stop”, they have speed limits posted, they have route knowledge. All detectable by computers, even without changes. Fast trains have in cab signalling in any case.
They have no decision about where to go (signal box sets the points/switch track), they have no decision on when to go (signal goes green)
The only bit which may need human input is the “ok to depart” notification when all the doors are clear at a station.
There are hard limitations on sensing onboard the train. Tracks that bend around hills, etc. I have some exposure to wireless communication for trains, and one of the limitations is that you often cannot get a line of sight even from one end of the train to the other.
Since railroad tracks rarely get up and go walkabout, you can instrument them (conduction, video, lidar/radar) to the limits of your preferences and/or budget. Information can be relayed to both central control and individual trainsets.
Self-driving implies intelligence, and fully-automated trains simply follow rote rules, and apply the emergency brakes if something unexpected happens. Not intelligent.
Put another way, self-driving has unbounded complexity, while a fixed number of vehicles on a protected, grade-separated railway is not very complex at all.
Cars are already equipped with suites of sensors giving them far more complete information than any human can process. Cars can already react faster and hold lanes with more precision than humans can.
What’s lacking is general intelligence. The ability to creatively respond to unexpected situations, even when it’s something you’ve never seen before.
That does not jibe with your numbers (one disengage per trip, roughly 10 miles), unless you're saying that Waymo's numbers are juiced by few city miles?
Call me back when waymo can drive in all seasons in random places anywhere in the US.
Doesn't have to solve all problems, only sufficiently large problems.
Autonomous driving is a marginal improvement on an already deployed technology.
Zero visibility in snow happens often.
The full description of the system can be read at https://path.berkeley.edu/sites/default/files/advanced_snowp...
Alaska uses GPS for their precision plowing - https://www.truckinginfo.com/329914/how-alaska-dot-uses-gps-... . Note that to get the 2" precision you need high quality GPS receivers.
> The trucks have two GPS receivers mounted atop the cab. These receivers cost about $10,000 each, Shankwitz says. "That's probably why this hasn't been deployed in many other areas; it's just too expensive and most applications do not require that level of accuracy."
> The two-centimeter accuracy actually comes from a third receiver -- a high-precision, stationary ground-based receiver perched atop a microwave communications tower in nearby Valdez. It's accurate to within millimeters and it acts as reference receiver for the plow-mounted systems.
I don't see this being standard on self driving cars.
That said, ground penetrating radar is being looked at and appears to be a lower price point. https://phys.org/news/2016-06-vehicles-high-precision-advers...
However, I'd argue the lanekeeping and "where am I" problems this stuff solves is dwarfed by the common sense and logical reasoning & recognition problems.
What I think is that self-driving cars may also force us to confront ways in which real-world driving environments are inadequate, so that we can make them more adequate. For example, there are intersections where stop signs (or other signs) are present but not visible. Humans know they have to stop there, so they stop anyway. Self-driving cars could systematically find and report these locations, and might get the city to do something about them.
But you are right, road maintenance is never cheap.
It would probably be easier to have a swarm-like national air-traffic AI, with all cars flying and all cargo in blimps, than to maintain a complex ground network of "smart roads".
All the road and infrastructure taxes are being funneled by politicians to other areas as they see fit, to placate their constituencies or finance their pet projects.

Same with out-of-control, grossly over-budget projects that don't deliver bang for the buck.
If we - as an electorate - insisted on superior paint or "marking technology" for surface roads, we shall have them in one form or the other.
That obviously has downsides too, but unreliable road markings would be a pretty silly blocker to ever having a self-driving car. That's a solvable short-term problem.
Cars today don't have any defense against people dropping bricks or pouring paint off an overpass, but somehow the system still works.
On the contrary, over the years I've read on many occasions how drivers and passengers have been seriously injured or killed by the morons who get a kick out of dropping objects from an overpass.
One bricked GPS system can attack an entire country. Or planet.
Having atom-resolution maps doesn't help a bit if you don't know where on them you are.
My 2006 car was fine driving underground for 10 mins with no gps signal, but got confused when I drove it onto a train and it moved 30 miles without the wheels turning. Then after a few minutes got a gps signal again and fixed itself. Had an option to manually set the position and heading too.
Driverless lorries that go from one service station to another will be the first fully autonomous vehicles I think.
Another possibility is Automaker X partnering with comms / infrastructure Company Y (e.g., Comcast). Put another way, if they can get to the point where this is the dealbreaker, then solving it is easy compared to what it took to get there.
"We can't even keep the bloody yellow markings on the road visible"
So although there's nothing grammatically wrong with the sentence, it reads awkwardly. A native British English speaker would be more likely to use your suggestion. It's just one of those barely documented curiosities like adjective order (https://www.theguardian.com/commentisfree/2016/sep/13/senten...).
Here's a bloody example: https://www.youtube.com/watch?v=OGWhjojt5dw
I live in a wealthy jurisdiction, and the road markings here largely are not what I'd consider to be "perfectly visible". I'd guess that currently, probably 3/4 of painted lane markings have one or more of the following issues: a) obscured by snow/ice, b) faded c) road under construction and lane markings don't correspond to current lanes.
Where the car would freak out and stop, you don't even notice anything is missing.
Between vastly reducing car traffic and separating it from pedestrians the problem of fully self driving is greatly simplified (probably not eliminated).
"I have this new auto-mobile concept which doesn't require a horse and which can go fast; I envision up to fifty miles per hour. It will require a smooth, hardened road surface, but that will be achievable someday."
"Forget about it. We already have millions of miles of roads, which are bumpy, made of dirt, and hard to build and maintain. Who will do the extra work to smooth them? Harden them? Maintain them? What if some jerk digs a hole in one as a prank? Maybe this could happen in a limited way in cities, but this is overall a pipe dream."
My point is, it isn't a question of whether this is feasible, it's a question of incentives. If the incentives lead society in this direction, it will happen.
The difference between the horse-to-auto transition and the auto-to-self-driving-auto transition is that autonomous-car solutions are inherently fragile, while cars are reasonably robust. (Flying cars are inherently fragile too: the problem isn't getting a car-like thing to fly, it's getting it to not hit things.)
I could see autonomous vehicles working in a city, but not out in the country where a driveway might be an unmarked dirt path.
With the ongoing climate change and projected energy crisis, humanity may not have the physical resources to build and maintain self driving cars.
If somehow the energy problem is solved and climate change does not bring chaos, I can see true self driving car before 2100.
I'm sure it'll be possible some day, but I'm not positive that it'll be possible in the general case before we reach the Singularity. And when we get there self-driving cars are going to be a small side-effect of this unprecedented revolution.
I can't help but draw a parallel between these threads about self-driving cars, where many people are saying it's basically a done deal and we just need to wait a few years, and the thread I read a few hours ago about the Boeing 737 MAX re-certification being delayed once more. I know it's a bit of a fallacy to treat HN as a singular entity, but when I read the threads about the 737 the consensus here is "it's a death trap and I'll never fly one of these planes ever again", yet at the same time we're totally optimistic that the industry will have perfected self-driving technology in our lifetimes? The industry has been cutting corners on planes that cost a fortune and didn't manage to make them safe to operate in unobstructed airspace because of a minor sensor dysfunction, but they're totally gonna nail the incredibly complex task of operating a 1+ ton vehicle at highway speeds in a much less controlled environment?
I can totally see an ever-increasing amount of driver assistance in the future. But fully autonomous driving everywhere at all times? I'm really not so sure.
There's a certain similarity, but I don't know of any paper that actually gives any assurance that self-driving is ever going to be possible. The only theory is "we will prevent extraneous factors and then calculate".
Everything that is possible today... used to be “impossible”.
If something appears “impossible” to you... there’s no information there about whether it is possible.
I agree that an argument from personal incredulity is generally not a good one. But that doesn't mean we can't demonstrate that things are very unlikely to happen. E.g., there's good reason to think that perpetual motion machines are impossible.
It's also important to realize that "possible" in a colloquial sense often doesn't mean "having an non-zero chance of happening before the heat death of the universe". When people are asking whether self-driving cars are possible, they clearly are asking with implicit constraints on where, when, and how.
In that context, we can have quite a lot of information about how possible something is. E.g., Elon Musk predicted that Tesla would have one million robotaxis on the road by the end of 2020. Rodney Brooks, AI expert and iRobot founder, thinks that's impossible, and I agree. https://rodneybrooks.com/predictions-scorecard-2020-january-...
They are incredible because they violate the laws of physics.
That's a tautological argument. Obviously things that happened were possible by definition, but you're not accounting for all the things that were actually impossible which never happened and will never happen.
It seems like the interesting and substantial arguments happen around what might or might not exist 10-30 years from now. Travel to other stars once was considered no more impossible than travel to other planets, and living on other planets used to be considered not much harder than traveling to them. We now can actually travel to other planets, and we now know how dependent we are on Earth's gravity, how hard it is to traverse the distances between stars, and so forth.
Which is to say, no, you're simply wrong: we haven't really progressed from everything being impossible to some things being possible. Just as technology has progressed very unevenly, our ideas of possibility have gone from a lot of things being sort-of possible to some things being quite possible and others relatively more unlikely.
I don't think self-driving cars have this problem.
What happens to all these markings when it snows a couple centimeters?
Fully self driving cars won't happen in our lifetime, probably not this century.
Imagine you are following a pickup truck and an obviously empty box floats out of the bed and lands directly in front of your car.
For a human it's trivial to know the box is empty and it's OK to hit it. Does "AI" know that?
Multiply that case x1000 and you have the conditions self-driving cars will need to handle on a daily basis.
Humans make the wrong call on this sort of thing all the time. They also make stupid passes, yield the right of way at the wrong time, drive the wrong way down one way streets, cut each other off, fall asleep at the wheel, drive drunk, drive without their glasses on, get road rage, etc etc etc.
There is this sort of one-way lens when it comes to self driving cars. People want to throw up red flags about all things they might do wrong while ignoring the millions of stupid things that humans do to kill each other with cars every single day.
"It's ok if I get into an accident - it will be the other guy's fault" is only the right reasoning if you're talking on the individual level about the monetary costs of an accident only. If you're talking about injury, or about the cost to society as a whole, those are bad consequences regardless of whose fault the accident is.
I think the actual answer is that self-driving cars will end up doing a good enough (i.e. at least human-level but not perfect) job of not wildly swerving or braking to avoid harmless objects like floating plastic bags that this won't be a concern.
> "It's ok if I get into an accident - it will be the other guy's fault"
Exactly. Most accidents take two people to happen, one who makes a mistake and at least one more who could have prevented the accident as well. For example, when right of way is ignored by someone in a left yield right situation, no accident happens if the one with right of way brakes in time. Or, if someone fails to merge in time and runs out of road, someone else can prevent an accident by braking a little.
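The "brakes in time" part can be made concrete with a standard stopping-distance estimate (reaction distance plus braking distance). This is a rough sketch; the reaction time and deceleration figures are illustrative assumptions, not measured values:

```python
# Rough stopping-distance estimate: reaction distance + braking distance.
# Assumed values: 1.5 s reaction time, 7 m/s^2 deceleration on dry asphalt.
def stopping_distance_m(speed_kmh, reaction_s=1.5, decel_ms2=7.0):
    v = speed_kmh / 3.6                 # convert km/h to m/s
    reaction = v * reaction_s           # distance covered before braking starts
    braking = v * v / (2 * decel_ms2)   # v^2 / (2a)
    return reaction + braking

# Stopping distance grows much faster than speed:
print(round(stopping_distance_m(50), 1))    # ~34.6 m
print(round(stopping_distance_m(100), 1))   # ~96.8 m
```

Note that a self-driving system's "reaction time" term would be much smaller than a human's, which is one of the few parts of this equation automation clearly improves.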
This is a great anecdote that definitely needs a source to back it up.
Primarily, there are a significant number of single vehicle accidents caused by drivers jerking the wheel instead of acting in a calm manner.
Secondly, there are many cases where a box is not safe to ignore; that could mean it damaging a fog light, a large staple in it hitting a tire, or it getting stuck somewhere, temporary loss of traction or visibility.
In conclusion: anything on the road should be treated as something to avoid, and definitely something to avoid a high-speed collision with.
Other anecdotes to consider: the first Model S fire was caused by a trailer hitch in the road. A hammer hit a Model 3. Asphalt coming loose in slabs and hitting a driver. Mattresses and ice falling from the vehicle in front of you. TL;DR: there are many accidents that do happen with human drivers.
I anecdotally question this based on both personal experience and stories I've heard. It seems like it would be a hard problem for both humans and AIs, however AIs have the edge in the long run due to sheer processing speed.
Also self-driving cars could have better sensors that don't have blind spots, and the multitasking ability to monitor all of them at once.
Autonomous vehicles will only ever truly exist upon infrastructure literally designed to aid them, greatly simplifying how they need to interact with the environment, thus making the problem tractable with code we can prove works. I really think it will take more than putting markings on existing roads. It is going to take new roads full stop, probably with various wireless checkpoints built into them.
After all, every driver on the road today is an incomprehensible black box where not only do we not know the parameters, we don't even know the function they're parameterizing. Every instance functions differently, and our testing procedures have woefully low coverage.
Not to mention that most software fixes cause other bugs...
We have precedent for how we qualify and evaluate things for safety: test them across a variety of conditions, accumulate driver-miles or operator-hours and incident frequencies. Then, using that data establish a bar for what constitutes an acceptable level of risk given the utility something provides. If we wanted to ensure nobody ever died in a car accident, we would ensure there were no cars, but collectively we've made a different choice.
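That evaluation procedure boils down to a rate comparison against an acceptance bar. A minimal sketch, where all mileage and incident counts are made-up numbers for illustration:

```python
# Compare incident rates per million miles against a human baseline.
# All figures here are hypothetical, not real fleet data.
def incidents_per_million_miles(incidents, miles):
    return incidents / (miles / 1_000_000)

human_rate = incidents_per_million_miles(incidents=2_000, miles=1_000_000_000)
av_rate = incidents_per_million_miles(incidents=150, miles=100_000_000)

# Acceptance rule: the AV fleet must beat the human baseline.
acceptable = av_rate < human_rate
print(human_rate, av_rate, acceptable)  # 2.0 1.5 True
```

The hard part in practice is not the division but ensuring the two fleets are measured over comparable conditions (weather, road types, disengagements counted honestly).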
Shutting down a plane is completely different from taking an entire class of publicly owned vehicles off the road. People will be furious.
Yes, they will be furious about the deaths and the shutdown, both. Don't forget that people are made up of individuals.
> entire classes of vehicles until the problem is confirmed fixed?
Yes, a malfunctioning AI would have to be grounded, just like the Boeing 737 MAX is now, for example.
Yes, of course we will. What is the problem with that approach? That is exactly the logical thing to do, and it will be done.
That's an extreme example, but automotive suicides that kill other passengers, drivers, or pedestrians fall into the same category. Consider also deaths from accidents involving drunk driving or fatigue -- thousands of motorists take to the roads every day modified in one manner or another that reduces their driving aptitude.
Also, while it may be correct to say that computers don't "fear death", there's no reason that "risk to self" can't be part of the criteria for decision making by an autonomous system.
Remember when Toyota had that problem of the accelerator "getting stuck" because the software didn't disengage? Initially the owners' insurers were paying out, until it happened enough that they were able to prove it was Toyota's fault, and then Toyota had to pay them back.
I imagine in a self driving world it would work the same way. You get insurance, the car has a crash, your insurance and the manufacturer fight out whose fault it is.
Insurance will work much better than it does today, because insurance at its core is about spreading risks and calculating the exact costs of those risks: calculating the statistics of negative events and predicting the total cost of such events for entire fleets.
ALL parts of that equation are just better calculated if all cars were automatic: you can better estimate the number of accidents; you can see the details of every accident because there is black-box data including video; you can compare cars to each other because a Tesla with the same hardware drives in exactly the same way as another one (which cannot be said for human drivers); insurers don't have to account for risky human activities such as drinking or driving tired; they can run simulations of the same situation on the same software; etc.
Insurance is not going to have any problems. Insurance is going to love self-driving cars and make a lot of money on them; they are a perfect fit for each other. Insurance companies don't even care whom they have to pay; they just care that the failure statistics are correctly represented and that manufacturers don't lie about them. That is all they care about: they calculate a simple equation, and that's all insurance is about.
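The "simple equation" at the heart of pricing is expected loss plus a loading. A minimal sketch, where the event probabilities, payouts, and loading factor are all invented for illustration:

```python
# Actuarial core: expected annual loss per vehicle = sum over event types of
# (probability per vehicle-year) * (average payout). Numbers are hypothetical.
events = {
    "minor_collision": (0.030, 4_000),     # (annual probability, avg payout $)
    "major_collision": (0.004, 40_000),
    "injury_claim":    (0.001, 250_000),
}

expected_loss = sum(p * cost for p, cost in events.values())
loading = 1.3  # assumed overhead + profit margin
premium = expected_loss * loading
print(round(expected_loss, 2), round(premium, 2))  # 530.0 689.0
```

The argument in the comment is that fleet-wide black-box data makes the probability estimates in that table far tighter than anything available for human drivers today.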
And if you go with the “nobody will own cars, you’ll just summon one” model... well the fleet owner will just sue the manufacturer instead.
For me? I'm a self-driving skeptic, but... if the manufacturer was willing to properly insure it, (I mean, a reasonable amount of insurance, at least a statistical life worth) I'd ride in the thing. I think that's an honest signal.
It's not just the manufacturers: who is underwriting all that insurance?
Ford sells approximately 2.3M vehicles per year. Imagine if 50% were self-driving; over a 5-year period that's 5.75M cars. If each one needs to carry a potential $1M policy, that's an incredible amount of liability on someone's balance sheet (even if you say the policy is only $100K, that's still $575B in liability).
That's only Ford; add in all the other vehicle manufacturers, extend that 10-15 years into the future, and it's an incredible amount.
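Running the arithmetic behind that estimate (the 50% self-driving share and the per-car policy sizes are the commenter's assumptions, not industry figures):

```python
# Back-of-the-envelope liability on the balance sheet.
annual_sales = 2_300_000      # approx Ford vehicles sold per year
self_driving_share = 0.5      # assumed fraction that is self-driving
years = 5

fleet = annual_sales * self_driving_share * years   # cars carrying a policy
for policy in (100_000, 1_000_000):                 # per-car policy size in $
    total = fleet * policy
    print(f"{fleet:,.0f} cars x ${policy:,} = ${total / 1e9:,.0f}B")
```

At the $100K policy size this gives $575B for Ford alone over five years, which is the scale the comment is pointing at.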
However, there is nothing to say that new laws won't be passed to allow manufacturers to escape liability. Most likely this is what will happen (see vaccine courts, etc.)
You don't understand the complex weighted probabilities in your doctor's head either, but you trust them to diagnose cancer (which incidentally machine learning is beating humans at). None of the algorithms in doctors' heads can be formally proved to work in all circumstances, nor can the code that runs medical equipment.
A full understanding of complex systems (machine or human controlled) is not possible today in many domains, that's why we measure results. If the data shows that self-driving cars are safer, we will switch. At present, that's what it shows.
As to special roads/markers, these would make the technology less effective at dealing with the unexpected (crash ahead, moose on road, cyclist in the lane etc), and many of the leading companies don't think they are necessary. I can see cars forming networks which report danger, or adding more sensors, but don't think our roads will have to change for self driving, which will be prevalent within the decade IMO without infrastructure changes.
It is layers of redundancy. If one fails, the car continues to operate normally. If all are operating at peak, the car is near perfect. If multiple fail, it operates with somewhat degraded performance, but still markedly better than a human.
* Digital maps
* P2P Networks
* Human-reported obstructions and changes (Waze)
* Machine-focused traffic markings
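If those layers fail (roughly) independently, the chance that all of them fail at once shrinks multiplicatively. A toy sketch of that redundancy argument; the per-layer failure rates are invented for illustration:

```python
# Probability that every redundant layer fails simultaneously,
# assuming independent failures. Rates are hypothetical.
layers = {
    "digital_maps":     0.01,
    "p2p_network":      0.05,
    "human_reports":    0.10,
    "machine_markings": 0.02,
}

p_all_fail = 1.0
for rate in layers.values():
    p_all_fail *= rate  # independence: multiply per-layer failure probabilities

print(p_all_fail)  # on the order of 1e-6
```

The caveat, of course, is independence: a snowstorm that hides the markings may also knock out human reports, so the real joint failure probability is higher than this product suggests.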
How will it deal with an accident up ahead where some drunk bystander is trying to direct traffic? How will it know to ignore the drunk guy? What if it isn’t a drunk guy but a sober person directing traffic? Does the car obey in that case?
None of those are edge cases because every time it drives it will encounter some novel edge case that has never happened before and it will have to perform better than a human.
Don’t even get started with liability. Once you take away the steering wheel the manufacturer is on the hook for every single mistake and every single accident. You’d be insane to be a manufacturer and sign up for that.
Sorry, but self driving cars are a complete fantasy.
Most people wouldn't instantly know how to handle those cases either. Many people would obey the drunk guy. Maybe that's the right thing to do.
That's why these things could never be rule based, there are too many small exceptions. When they're not overfitting/able to memorize all your training examples, neural nets learn heuristics, just as people do. Different people learn different heuristics. Granted, they don't have much of the same context about the world that people do, it will take a long time to build enough examples for them to infer all of that. But Tesla's fleet is getting more driving experience every day than you will get in your entire life, and every time they train on one of those exceptions, the entire fleet will benefit.
> Don’t even get started with liability. Once you take away the steering wheel the manufacturer is on the hook for every single mistake and every single accident. You’d be insane to be a manufacturer and sign up for that.

If drivers no longer carry their own insurance, this is probably going to be handled by insurance at the manufacturer level, and baked into the price. The insurance will demand certain processes to prevent large-scale bugs being rolled out.
I don't see any fantasy here, just a lot of work.
Humans can reason about things that haven't happened to them before. Today's machine learning systems cannot. As you say, to react appropriately they must have been trained to do so using human-annotated data.
The argument against FSD is that you would need an infinite number of annotated examples, and an infinite number of subroutines for behaving in any identified situations, because the space of driving is effectively infinite.
Until machines are able to do general reasoning about things they've not experienced before then FSD is not happening. By the way, Demis Hassabis thinks that this sort of transfer learning is the key to solving AI.
That’s not how machine learning works. You can “train on exceptions” as much as you want, and you have 0 guaranteed results. It can help, it can make no change or it can cause unexpected regressions.
Media (and a surprising number of tech people as well) tend to claim that ML learning is like human learning: repeat something enough times and you're done, you know how to do it. ML is nowhere close to that point.
Prove they are getting better. They run into the sides of trucks and off ramps on the freeway quite often (and don’t you dare blame the driver... it’s “full self driving” remember?)
Can you prove a machine learning algorithm does the right thing in novel cases it hasn’t encountered before? Nope.
And also, don’t reply with “well can humans”. That is a lame rebuttal. Computers will be held to an almost 100% non-failure standard before society accepts them. And that will never happen because of, well, reality.
I haven't done the math, but time spent on forest service roads, wedding parking lots, and boarding ferries is quite low overall. And the idea of driving in those instances doesn't bother me. Having the car take care of the other 99.999% of my driving life is what I do care about.
> None of those are edge cases because every time it drives it will encounter some novel edge case that has never happened before and it will have to perform better than a human.
If most of your driving involves drunks directing traffic, forest roads, and dirt-lot weddings then a self-driving car is likely not for you.
Which is all well and good until you realize a few things:
1) that “last 1%” happens every trip at any time. You will always encounter an edge case the machine cannot handle. Period.
2) as a result you have to always pay attention in order to immediately take over
3) you can’t because you (the royal you) are three sheets to the wind plastered drunk.
Sorry. If I have to pay attention for that 1%, it ain’t full self driving. And anything that encourages you not to pay attention 100% of the time is unsafe and shouldn’t be allowed in the road. And if I have to pay attention 100% of the time in order to take over, what the fuck is the point?
Take the Japanese highway system. It is well maintained and has "rest" stations where you can park your car for free.
If you could get an app that can drive a drunk salaryman, or not even drunk, just tired, from a Tokyo interchange to the rest area closest to wherever he's going, you will win. Nobody who uses a car for highway driving will buy one without that, period. It is a killer app. Get off work at 9pm Friday, get in the car at the IC, punch in the destination, wake up at a rest area 20 minutes from the ski resort, the onsen, your parents' house, etc.
Snow, typhoon coming, whatever: park at the closest rest area.
I am very pessimistic about cars without steering wheels. I am quite optimistic about cars that have the ability to drive well-marked roads. Here is a crazy thing: charge tolls for highways like Japan does, then maintain them.
Yes, there can be a big hole in the road at any time. The AI has to watch the vehicle in front of it; if there is no such vehicle, it should choose a speed that lets it evaluate road conditions for the given weather/visibility.
Vehicle coming into our lane? The AI has to match human level maneuvering to evade the incoming car. It already has much better chance given it won't panic, will be always fully alert, and will be as accurate and precise as it can.
So the big categories are sudden road/environment changes (tree falls on road, hail, mudslide, earthquake damages road, animal crosses road), other vehicles, and pedestrians/cyclists/etc.
All are manageable with inferences from the environment (weather and roadside context determine visibility and how much space there is for maneuvering, how likely are unexpected crossings - eg. deer, kids) and surrounding traffic.
Are these hard? Sure, but none require human level cognitive reasoning.
Are you going to randomly have to park in a wedding lot? That's the only example you gave that might happen in the middle of a trip, and the Tesla can handle it just fine for long enough to pass control over to the driver.
> you can’t because you (the royal you) are three sheets to the wind plastered drunk.
It's still the driver's responsibility to drive sober. Even so, I'd far rather someone who is drunk be behind the wheel of a self driving car than otherwise.
Either you're driving, or you're a passenger in a car with a driver who seems reliable, but really isn't totally so. Eventually, that will become a winning bet, but when?
Driving safely on frozen surfaces is not a solved problem for human drivers, but most of us insist it is a perfectly reasonable thing to do.
If the vehicle drives as slowly as it should in those conditions, it would probably frustrate a lot of people who really depend on their false sense of invincibility.
A car has access to all 4 wheel sensors independently, it can apply brakes on each 4 wheels independently. It can always turn the steering wheel in the right direction, and it wouldn't panic.
Also, it would always drive the 'right' speed limit... sure, other human drivers might get annoyed, but assuming the true 'full self driving' future happens, there shouldn't be many of them on the road in time anyways.
What does that mean? That it actually is solved by humans? Or that it's not, and we just take the increased risk? If the former, then we can automate it. If the latter, then the self-driving cars can also drive at increased risk, opt-in of course.
If we built planes that were only as safe as highway driving, people would be outraged.
We aren't exactly rational about this stuff, and we expect a lot more from machines we don't directly control.
But I like your opt in solution. Maybe the UI loudly complains about the risk of current conditions and sticks to <5 MPH, unless the user enables "never tell me the odds" mode.
It's possible, sure. But it only seems likely to me because I grew up in an age of rapid progress in information technology. History, though, has plenty of examples of technological plateaus and regressions. To people in the 1970s, it seemed obvious that by 2000 they could vacation on the moon. But the rapid progress of the space race quickly dwindled; the problems were harder than we thought and the rewards smaller.
The notion that we can make a computer as smart as a human is one of those things that seems like it will be just around the corner. But it seemed that way 50 years ago, too. E.g. HAL from 2001. It's perfectly possible that humans aren't able to make anything smarter than themselves. Judging by most of the software I use, we're barely able to make things much dumber than ourselves.
I feel like the correct way to describe this future isn't "self-driving cars" but rather "personal autonomous trains." That is, the road system described here would just be a rather clumsy railroad network.
I interpret the goal of having "self-driving cars" as referring to the ability to have a passenger vehicle that can autonomously navigate (wayfind?) off-road, i.e. what the aim of the DARPA Grand Challenge would eventually evolve into.
Think about attending a festival or fair with grass parking. You follow a line of cars, pull up to a guy who's standing out in the field. He looks around and says, "Why don't you go park next to that red Toyota two rows over?" Sure, that part is not "on the road," but certainly I had to take highways to get there.
Maybe I, as an urban-dwelling American, only need functionality like this a few times per year. But there are significant chunks of this country and the world in general where this is part of daily life. Adopting fully self-driving cars without manual driving modes is going to take extreme amounts of change and adaptation, not only technologically but also culturally. I would recommend spending a few weeks in the deep country if you want to fully understand some of the difficulties in reaching level 5.
If the time scale you're talking about is on the order of 50 years, I could maybe see it. But I do think there will always be a need for personal vehicles with some level of manual control.
Beyond all that, however, this article to me seems like 90% clickbait. The statement merely was "Maybe it will never happen," and it was stated in the context of a discussion of the difficulty in reaching level 5 autonomy. But now we have articles throwing headlines up saying "VW Exec admits fully self driving cars may NEVER happen." Feels a little disingenuous.
(I mean, we're still a long ways away from level 5 in the city.. I'm just saying, something that was level 5 only on pavement and only in the city would be damn useful; and good enough for more than half of us.)
We’ve done self-driving as a POC for a decade in Denmark, and it hasn’t really improved much, to the point where we too are considering, that it’s probably never going to work in the real world.
Don’t get me wrong. Self driving already works, it just doesn’t work on roads. Roads where you’ll suddenly have a bunch of leaves flying around. Roads where the paint job is cracking, perhaps even missing. Roads where the street signs are old and faded.
In ten years of testing with some of the best in the business, we’ve had maybe two days worth of self-driving.
Who is "we"?
new technologies are over-estimated in the short term, and under-estimated in the long term.
Decades ago our computers were "soon" to be voice controlled, listening to our speech and doing our bidding. That was a big load of hype. However, over time and below the radar it became true as computers first answered phones, then took limited commands in cars and smartphones and now it is basically true (without all the hype).
Also, I wonder if these kinds of comments risk becoming
"I think there is a world market for about five computers."
"640K of memory should be enough for anybody."
[What I actually said was 'I dictated this entire comment in a quiet office <period> The word error rate isn't too bad <period>'. Built-in recognizer on a Mac.]
In fact, the new generations of Volkswagen Golf, Škoda Octavia, Seat Leon, and Audi A3 (released between October 2019 and March 2020) will already be able to communicate with each other and with the traffic authority in real-time to prevent road accidents.
Volkswagen Golf, Škoda Octavia, and Seat Leon are #1 best-selling car models in multiple European markets, and Audi A3 is one of the most popular and affordable premium cars in Europe.
I'm personally bullish on fully autonomous transport because of the industry interest shown in it and the potential demand for it. The latter I don't see going away unless something comes around that makes it redundant.
If it takes 50 years to have a fully ready environment with beacons, networking etc., will we still be riding “cars” on “streets” ? I ‘d guess when we reach that point we’ll also have solved the “getting from A to B” in completely different ways.
We have already done this; they are called railways. Some of them even have self-driving trains.
To get self-driving cars on roads you need a human-level AI. Trying to get away with less intelligence by restricting the environment will never get to the point where it is safe to have automated vehicles. You can't provide infrastructure clues to help the vehicle tell when a child is about to run out into the road, for example (and there are many other examples too), so you would have to segregate them physically from everything else, and then you might as well stick to railways.
When I drive and there's a child walking on the sidewalk, I'm not analyzing the chances the child will jump onto the road. I'm just assuming the parents have succeeded at explaining how to not kill yourself; without that assumption I would go crazy with too many things to worry about. AI does not go crazy, or even get tired, with too many tasks. It might usually come up with the same result that I always apply: just drive slowly through roads with children on sidewalks. But it might do something I won't do: slow down even more if the child seems to be doing something suspicious.
One of the problems with Google-designed self-driving is that it goes slow and refuses to go if uncertain. I imagine that's why those cute little cars without a steering wheel were axed. Even if the system has driven a bajillion miles without causing any risky situation, it won't sell if it can't guarantee doing your commute in your usual time. But it can't assume the same risk you take every day: that if that child behaves extra stupid and the law misfires, you might find yourself traumatized and without a driving license. Because when AI loses that license, it's killing the whole business. So the bar for safety must be higher, slowing you down to a fully controllable crawl, or constraining to high-safety situations. Hence you get all those highway-assist features all over the place.
Disclaimer: I work at Google, but have no insider information about Waymo. I just remember these marketing materials from the Chauffeur days and still think that was the way to go.
Then you give users of the service a choice: maps showing where the service was very reliable, where some accidents happened, or where not enough data was gathered (i.e. there could be problems). Let users decide where they want to go and what level of risk they are willing to take.
Maybe we could have a device (a smartphone?) that serves as a marker for vehicles and helps them identify the people/dogs/property around them. If you ask me, this could be done even now.
Now, drivers take these risks because they subconsciously learned that the probability of an accident is vanishingly small, while the probability of being honked at for going anything below the limit is rather large. Thus they subconsciously balanced the expected reward/punishment and lean toward taking the risk. But AI is always aware of the risk and very well able to calculate it. Now imagine the headlines "Google's AI intentionally kills X children per year" and the regulatory reaction.
Having cars move slowly but steadily does not make trips longer on average, at least in areas prone to traffic jams. Even though human drivers can find it frustrating, it's actually the opposite. If vehicles are consistently slow enough, you can even get rid of traffic lights and stop signs.
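The slow-but-steady intuition can be checked with a toy calculation (the speeds and the 50/50 stop-and-go duty cycle are made-up illustrative numbers, not traffic-engineering data):

```python
# Toy comparison: stop-and-go vs. slow-but-steady over the same distance.
# All numbers here are illustrative assumptions.

def trip_time_h(distance_km, avg_speed_kmh):
    return distance_km / avg_speed_kmh

distance = 10.0  # km

# Stop-and-go: 50 km/h half the time, standing still the other half,
# so the time-averaged speed is 25 km/h.
stop_and_go = trip_time_h(distance, 0.5 * 50 + 0.5 * 0)

# Steady crawl at 30 km/h, never stopping.
steady = trip_time_h(distance, 30)

print(f"stop-and-go: {stop_and_go:.2f} h, steady: {steady:.2f} h")
```

Under these assumptions the steady 30 km/h crawl beats the nominally faster stop-and-go traffic, which is the point being made: perceived speed and average speed are different things.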
- researching it may not be feasible:
  * because it becomes a waste of engineering resources to continue the research
  * because almost-self-driving cars may be good enough
- political resistance or liability laws prevent mass rollout
- the cost of building and maintaining the required infrastructure is too high
- not enough car purchasers may want it
I think when someone says 'will never happen' they mean 'not in the foreseeable future'.
Obviously the environment, usage and science can change such that full self-driving could happen. After all, in the 1600s nobody could imagine a Boeing 747 flying with hundreds of people on board.
But the hype around self-driving (by all the dreamers) has been that it's more or less 'right around the corner', not 20, 30 or 40 years away.
How resilient would such a system be?
How easily could it be attacked to interfere with a car's ability to navigate safely?
Also, and much longer term, isn't it kind of weird to think that human infrastructure can only massively scale on the planet so long as we turn the planet into a cyborg?
Isn't it weird and wasteful to build a planet-wide cage of tech infrastructure just so the economies of the world can survive and human civilization can keep operating along the trajectory we are currently pointing at?
Seems dismal to me.
Or even easier, platooning. I don't understand why autonomous cars are a bigger thing than platooning. Platooning solves 95% of the use cases of self-driving cars and is an orders-of-magnitude easier problem to solve.
At least for me. I don't mind driving in cities myself, but if I could just nap or watch a movie on the highway part, that would have some utility.
I assume some regulatory developments for this are also missing, in addition to a platform and a universal tech kit for private cars. If such a platform already exists and all regulatory hurdles have been tackled, then all that's missing is widespread adoption and marketing...
I mean, you could do a retrofit kit for regular cars, too... but that seems harder to get exactly right (considering the cost of a fuckup) and would require a bunch of new marketing infrastructure, whereas if Ford, say, just bought Peloton and said "hey, make this work across our model lines", well, that'd be a pretty good argument for buying a Ford.
Multiple important stakeholders seem to have significant incentives to make this happen. Local/state governments want fewer traffic jams and crashes, auto insurance companies would love to collect premiums and not pay out, and Uber and Lyft would love to get rid of their drivers.
The ratio of pilots to total people on a plane is often something like 1/100. In a car it might average 1/1.2 or so. I think that should be improved.
Also, on a plane, those pilots are never expected to jump in with 2 seconds' notice because something isn't working right. I don't think that is realistic... people zone out, but that is expected in "supervised self-driving".
Never means never, not even in 145 million years. But he probably meant not in our lifetimes, at least with current urban planning. When all cars are self-driving it will probably be better.
My guess is that interstate type highways will be instrumented for trucks and cars will benefit.
So this means that self-driving cars will have to safely handle these cases, which also means it is likely cars will still have to be drivable manually.
Bottom line: self-driving cars will have to handle the absence of smart infrastructure (in which case, do we really need that infrastructure? I think we'll still need it to guide and improve traffic) and/or cars will continue to be driven manually at least some of the time.
A city-only car would be totally useful for people who live in cities; you just rent something when you want to go to the boonies.
Heck, most BEVs are that way now; I've got about 120 miles of range on mine, which is fine almost all the time. The two or three times a year I need something with more range or more cargo capacity or what have you, I borrow or rent.
I think the economics of the first Level 5 cars might be similar to current BEVs, in that you can only go where there is infrastructure. Which is where most of us go most of the time.
But they still need to handle fault cases: e.g. they cannot rely only on a beacon sent by traffic lights, because that beacon or the whole traffic light might be out of order. So, imho, while smart infrastructure may help self-driving cars and traffic management, it does not let you avoid the "hard work", which is to make sure self-driving cars behave safely and reasonably completely on their own.
Note, you still have the safety problems to solve. You need to know how to get out of the road if the road infrastructure is on the blink. I'm just saying that "I don't know what to do so I will pull over" as long as it doesn't happen too often, is an acceptable answer.
Also, a vast majority of the world doesn't even have proper roads like many western countries do.
In many parts of the world a road is not even paved or asphalted, and let's not forget what is actually making use of that road.
Seeing animals on roads might be a rare sight in a western country, but in much of the world the road is shared by more than just passenger cars and trucks.
Take an electrified rail network, pave it over to resemble a tramway track, and add on/off ramps wherever they might be useful. Figure out an economical mechanical design that would allow a computer to precision-drive along the rail, with on-the-fly mode switching. That is an extremely limited task scope where computers would excel, very much unlike the almost-AGI requirements of full self-driving. Mandate strong requirements for access to that network, including a small minimum range of battery-autonomous operation, so that you don't have to reach the atrociously high number of availability nines a conventional rail network needs to avoid total schedule collapse.
A lot of this is because railway signalling operates on a brick-wall principle: it is constantly assumed the vehicle in front could come to a dead halt instantaneously, whereas most road situations assume you can match the braking performance of the vehicle in front with some margin.
The railway case is safe for the trailing vehicle in the situation there's a concrete block on the track ahead, the road one is not.
That applies to old-style (though still in common use) signalling. CBTC (which every railway should use, but very few do) has margins similar to, or even tighter than, road signalling.
I'm also unaware of any freight or mixed-traffic application of CBTC, which makes it a stretch to say every railway should use it; plenty of proven in-cab systems provide many of the same benefits (and you can decrease block length substantially to get much of the way there).
Drivers feel safe attempting brake matching (they fail often enough) because the road code assumes you never go fast enough for stopping distance to exceed visual range. Even where that rule is routinely broken, the brake-match distance stays comfortably within visual range (stop-light waves travel upstream). In rail, anything happening within visual range is basically too late to even bother with, and this is entirely a consequence of braking performance.
Not really. EMU passenger trains, such as subway trains, do have good acceleration and braking; good enough that it's limited by passenger comfort, not by the physical hardware. This limit is about 1 m/s², with emergency braking reaching 3 m/s² (note that the latter does imply several passengers are going to be nursing injuries; there are no seat belts, after all). That's roughly comparable to typical passenger vehicles.
Freight trains have much longer braking distances, but that's because 10,000 tons moving at 50 mph has insane momentum, combined with relatively few axles contributing to stopping force.
The main reason you need large distances between trains: switches. Controlling where a train goes requires moving a physical piece of infrastructure at the switch. You can't move the switch until the previous train clears it, and you don't want to let the subsequent train reserve a path over it until it has settled into its new position; if the switch gets stuck in the middle, the train derails instead (or worse). The "brick-wall" principle follows from this situation.
Just like horses. Am sure there was a time no one ever imagined cars would completely replace horses on roads.
Here's a grim reality: at ~$3 million per death in a car crash (a reasonable estimate of overall insurance cost from a decade or so ago), with ~37k deaths/year in motor vehicle accidents in the USA, that's roughly $100 billion a year, a mere 0.5% of its $20 trillion GDP. So I'm not holding my breath for public or private action at massive scale (consider that fracking alone was orders of magnitude more profitable for the US, and came with a strong geopolitical advantage to boot).
Do the math for your country: $3M/death over GDP is usually negligible compared to "the big thing" your local politicians and corporations keep talking about.
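That back-of-envelope calculation, parameterized so you can plug in your own country's numbers (the US figures are the ones from the comment above):

```python
# Road-death cost as a share of GDP, using the comment's US figures:
# ~37k deaths/year, ~$3M per death, ~$20T GDP. Swap in local numbers.

def crash_cost_share_of_gdp(deaths_per_year, cost_per_death, gdp):
    return deaths_per_year * cost_per_death / gdp

share = crash_cost_share_of_gdp(37_000, 3e6, 20e12)
print(f"{share:.2%}")
```

The result is just over half a percent of GDP, matching the "~0.5%" in the comment.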
Even in Western Europe, where
- regulation is people's #1 method for solving everything and anything,
- "the value of life" is emphasized every other speech and publication and actual social security systems, free medical care, free education even, etc. (a few hundred bucks away from actual UBI, for real),
- companies could actually compete (Europe has no tech giants, but several big car manufacturers),
you don't hear a lot of political, popular, or private (business) support for Level 4 infrastructure (L4: roads dedicated to self-driving cars, likely to kill ~1000x fewer people than human-driven roads, not to mention the economic gain of time while commuting and travelling by road, which whether for work or leisure is a net psychological gain).
Actually, L4 is not even a "topic" in many such countries (let alone L5); it's a curiosity, a funny segment to wrap up the news. Even though L4 is totally doable NOW. What you actually hear is a lot of fear about tech, as usual. That's about it for self-driving cars.
I have no idea why; it makes no sense to me. But if rich, cosy, comfortable, life-adoring, 35h/week western Europe doesn't want it badly, I don't know who does, or will, in the short/medium term.
The above "grim reality" is just my way of fishing for answers, really. I don't know. I'm just skeptical that self-driving cars are something people or leaders (public and private) actually want. I hear much, much resistance to the idea and very little interest in the upsides from the mainstream. I see smiles and rolling eyes, and 10 years later there is still no decent infrastructure to charge EVs except Tesla's (a foreign entity, and by far the biggest promoter of it all; but can they do it? Can they reach L5 or politically negotiate L4?). Which brings us back to the above concerns, or absence thereof, of the mainstream.
It's like space, basically: it would take an incredibly small share of the world's GDP to put in massively more effort and shorten industrial-scale space activity timelines dramatically. If it's 30 years away at the current rate, we could make it by 2030 quite easily, without pushing hard (nothing like a war effort, for instance). And the benefits are so immense it's basically stupid to argue against; the question is how to do it best. And yet it's still anecdotal in most countries' budgets; it's mostly just PR. Even as we speak, a "prime time" for space as a topic of (positive) interest for the mainstream. Go figure.
Self-driving cars, it seems, are held back more by political and social resistance than by overly idealistic technical goals: the former is a current showstopper, whereas our current technological capacities are not.
A disruptive technology typically looks:
- ridiculous before the fact
- dangerous as the disruption becomes real
- obvious in hindsight
I.e. a "paradigm shift". These things take time, from inception to maturity for adoption, regardless of the tech. Usually about a generation: customers and voters need to be mostly people born with the idea as an "almost reality" (after the PoC, before mass adoption); that's what it usually takes to climb the S-curve.
Cars themselves weren't accepted or desired by most people for years after their appearance; it took time to change minds.
But in some cases it was much faster, like the web or mobile phones. I had just hoped this would be one of those cases.
(meta: I think it's totally OK to disagree, upvoting you for discussion as a shield against downvoters based on opinion)
As a rough rule of thumb, I'll call a car fully self-driving if, without any changes having been made to the road system to accommodate self-driving cars, I feel comfortable getting in the back seat, telling it to drive me to a certain house in a city a few hundred miles away, and falling asleep.
And sing kumbaya, holding hands together....
What about adversarial car AIs? Malicious actors, etc.?
I reckon motorways could be handled easily enough, and basic dual carriageways and normal intersections, but once you start mixing up multiple modes in inner cities, some tough decisions need to be made.
So it's like the IPv6 problem. If we could all coordinate at once, it would be easy-peasy. In reality, it's virtually impossible.
Edit: a commenter pointed out that pedestrians are also a major problem; even in a far-fetched imaginary scenario I don't know how you would remove those from the equation.