
> So far, nobody seems to have full self driving without "safety drivers".

A common saying around here is that we have two seasons: winter and (road) construction.

Construction zones have pretty much every obstacle to automated driving you can think of:

* painted lanes that don't correlate to the temporary lanes marked by cones

* lanes that don't correspond to pre-programmed maps / gps

* irregular and unpredictable vehicle and pedestrian entrances and exits (construction workers and trucks)

* areas where traffic is reduced to a single lane for both directions, and must take turns coordinated by humans with signs at each end of the lane

* speed limits marked by temporary signs

* rough, temporary transitions between pavement and gravel

Unless we can somehow get every state to compel every road construction company and every autonomous vehicle maker to use a single communication protocol, and to implement it at every construction site (so autonomous cars are made aware of these dangers), it's not going to happen.

Oh, and said protocol has to be hack-proof so trouble-makers can't start convincing cars that they're in the middle of a construction zone and force them out of their lanes on normal roads.

It's conceivable that the coordinated effort could happen, but I'm not going to hold my breath (due to the sheer increase in cost to the government) nor will I trust that said protocol will have fail-proof security.




>Oh, and said protocol has to be hack-proof so trouble-makers can't start convincing cars that they're in the middle of a construction zone and force them out of their lanes on normal roads.

Why would it be easier for trouble-makers to fool autonomous cars? As a human driver, I'd be fooled by pretty much any road marking or guy in an orange vest.


Correct.

It’s amazing how forgiving we can be of human error (accidents every year) but absolutely not of machines/autonomous vehicles, even when, statistically speaking, machines may make better decisions much faster (or at least no worse than human judgement)... I guess the feeling/perception of being in control is more important to us...

Another interesting observation I have in every autonomous vehicle discussion is how we only focus on edge cases... when in reality, every tool we use today (including the cars we've been driving) is built for the general use case and operates in a mostly controlled environment.

Rather, thinking of the autonomous car as an additional pair of eyes and hands when we need them most might serve us well in the short run, before the technology matures over the next decade or two.

I’ll be really happy and relaxed if my car can mostly (70-80%) drive itself on my daily commute or my next trip to LA; expecting it to be my chauffeur is a bit too much, personally.


This is why I wonder why platooning technology is so much less hyped. Give me a platooning hardware kit for my current car and enough users whose platoons I can join on my longer trips, and literally more than 95% of my self-driving needs are covered. I really do not care if I need to drive a 10-minute stint in the city every now and then. And if I do, I can take a taxi. But getting my hands off the wheel and my eyes off the road on the highway is what would have real utility for me.


Platooning kind of messes up highway traffic because people need to either cross a ton of lanes to get in and out of the platoon (if the platoon is on the left), or non-platoons can't get on/off (if the platoon is on the right). If everything were forced to platoon on highways it would work. But that's like 30-50 years after the tech is introduced, barring some really radical legislation with huge popular and state support.


Not sure if it would be technically possible to have gaps in the platoon after every five cars, but at least it would be trivial to set the max size of one platoon to something reasonable.


Sure, yea, I don't mean like it becomes literally impossible to merge in. Just that for any reasonable size platoon, it disrupts more traffic than it saves. Given existing roads and the existence of not-platooned cars on the same roads, it doesn't really work.

Maybe it works for long haul trucking though.


You honestly trust some random driver at the front to make your decisions?


Well, that sounds pretty much like flying on commercial flights or traveling by bus. So I guess, based on my travel history, the answer must be yes.


When I fly, it's a trained pilot. Even for driving, there's stricter licensing for a CDL.


And you think there would be no extra qualifications required for the platoon heads?


  amazing how forgiving we can be of human error ... but absolutely not of machines
A given human failure produces one event. A flaw in autonomous driving software can mean thousands of failure events.

Also, when a human driver's negligence results in injury or severe damage, criminal charges result. That's a deterrent. With autonomous driving, you can't prosecute an algorithm.


Yes, the point about charges would be a real issue, something that should be debated...

Would "use at your own risk" vindicate the company behind the autonomous vehicle? Or is the owner responsible for his vehicle's actions? I guess never in history have we had such advanced automation in the direct hands of consumers...

As for the failures, I have reason to disagree... if autonomous cars are working under "unsupervised learning", my assumption is that they will most likely make different decisions for the same scenario based on the data at hand... so those thousands of failure events, though they may look similar, may or may not end in the same result... similar to how we would react when faced with some unknown situation on the road... your scenario is more likely to play out with a bad batch of hardware (devices/sensors/lidar/camera etc.) in the autonomous system...


>or owner is responsible for his vehicle's actions?

If it's sold as fully autonomous, i.e. significantly beyond Tesla's system today, I don't see how the manufacturer could not have the liability. How comfortable would you be to use a car that could expose you to severe criminal liability because some company made a mistake with their software?


Liability is assumed. I'm speaking of the criminal prosecution an impaired human driver would face in addition to financial liability. The automated vehicle would face no criminal exposure.

The company responsible would also have a clear incentive to alter/destroy any damning evidence gathered in telemetry.


>The company responsible would also have a clear incentive to alter/destroy any damning evidence gathered in telemetry.

Not saying it doesn't happen. But now you've gone from a product liability case which rarely has individual criminal consequences to actions that clearly do.

If/when we get to this point, it will be "interesting" though. Outside of maybe the medical area, there aren't many examples of consumer-facing products that, when used as directed, kill people because sometimes "stuff happens." And people generally understand that's just the way it is.

It's not out of the realm of possibility to imagine government-approved autonomous driving systems that insulate everyone involved from liability so long as they're used and maintained as directed. See e.g. National Vaccine Injury Compensation Program. I'm not sure it's likely but it might become a possibility if manufacturers find they're too exposed.


> I’ll be really happy and relaxed if my car can mostly (70-80%) drive itself on my daily commute or my next trip to LA;

There's a caveat here: this 70-80% must be contiguous, and the car must be superhuman-level reliable in that segment. Otherwise, the "additional pair of eyes and hands" significantly increases the danger. If your car suddenly decides that it can't handle something and asks you to take over at the last second, you won't be able to handle it either.


My assumption is that you get to this for some subset of highways in some subset of weather conditions with some special rules in place (maybe mandatory maintenance schedules?).

Which is actually a big win, as long highway drives are boring and probably account for a decent chunk of the more serious accidents.

It doesn't give you the robo-taxi use cases that are what a lot of urbanites care about the most. But it would be a nice safety and comfort add-on for how a lot of people spend many hours of their weeks.


The problem is that you encounter edge cases on every drive, and you need to be ready to respond. The car may be able to handle 80% of the trip, but one of those edge cases will sneak up on you and the car. How long would it take you to regain situational awareness and safely maneuver the car in the event of some unexpected situation after you've been cruising in self-driving mode for an hour? Ten seconds? Five? Can you do it in one? What if the car doesn't realize it can't handle the situation at all?

Like any risk, you also need to consider the impact of getting it wrong. If an audio assistant gives you the wrong answer to the population of your hometown, no big deal. But if your car thinks everything is okay and drives you into a stationary fire truck on the shoulder of a freeway when you are travelling at 70mph, the downside of that edge case is infinitely worse.

Sure, humans can make these mistakes, too. But the fact is that your notional world where computers are able to make smarter decisions than humans about how to drive doesn't actually exist. No one has figured out how to make it work. And they won't anytime soon. They've solved all the easy parts. But it turns out there's a lot more involved in driving than all the billions of dollars poured into the problem so far can figure out.


https://www.youtube.com/watch?v=9SexsvIO4vE

The internet is full of these examples.

My point is, a computer with:

- more data (historical data on how to act in certain situations, plus live data for the event, i.e. sensor data, lidar/radar data, images) vs. a human driver, who would not have access to these or the ability to process them

- faster, parallel processing vs. a human driver

- a single focus/goal (driving from x to y safely and making appropriate decisions to achieve it) vs. a human driver (with "physical limitations", "emotions", "hormones" and other things that make up "life"), who is more likely to be distracted...

A computer with all of the above advantages may be able to make better-informed decisions much faster than a human driver can (and when it doesn't, it's hard to know/prove that a human driver would consistently make a better decision every time in the same situation).

Having said that, I agree the tech is in its infancy and it's going to take a decade or two to mature, and even after that, just-in-time human intervention would be needed in some cases. But in the mostly controlled/learned environment (which is 70-80% of total day-to-day driving), these systems would be immensely helpful.


If someone did this to a human driver, they may make a mistake and potentially get injured, but pretty quickly someone would notice and do something about it. With automation, a cleverly crafted issue could persist for some time causing quite a bit of damage before it's corrected.

https://www.reddit.com/r/gifs/comments/6ofa63/dont_turn_your...


You might be fooled by a man in an orange vest, but I doubt you'll listen to him if he's telling you to drive into oncoming traffic.


If you don't immediately see the oncoming traffic in question, yes you will. Otherwise your self-preservation rules will prevail and you won't budge.

Note that self driving vehicles aren't different from humans in that respect, except they see much farther.


With self driving cars you have no idea what they would do when presented with an edge case they may have only seen very rarely before. With humans, you can in most cases assume the driver would do something reasonable, especially if there is enough time to think through the situation.


You must be driving in much better places than I do. Human drivers do incredibly unreasonable things even without edge cases.


But they understand less and have less common sense by which to operate.

So, it takes a lot more work on the programming side to compensate.


> Why would it be easier for trouble-makers to fool autonomous cars? As a human driver, I'd be fooled by pretty much any road marking or guy in an orange vest.

Imagine someone hacking the 'construction zone protocol' and spoofing thousands of cars into thinking they're in a construction zone at once. You'd be hard pressed to fool thousands of geographically separated human drivers at the same time.


> pretty much any road marking or guy in an orange vest

That only works if a police car doesn't come by and catch the perpetrator in the act.

With a wireless communication to automated drivers, someone could plausibly feed bad information from a hidden or otherwise remote location.

Beyond that, just as automation allows human-intensive processes to scale by removing the humans, fooling automated drivers can scale much more readily than fooling human drivers.


Apparently, people mess with autonomous cars simply because they're autonomous. It's hard to say whether that's just the novelty factor or something likely to persist.


I can easily imagine some bored teenagers (can even imagine a certain version of younger me doing it) blocking an empty autonomous vehicle from leaving a parking space just for kicks. I suppose coaxing other empty ones into a ditch or waterway isn't too much of a stretch either if the cars are owned by some mega-corp and become some kind of cheap public good like shopping trolleys.

As soon as it becomes a robot, a lot of the social pressure to be a good person falls away. Less so if there are people inside, but I can see empty autonomous cars being given a pretty hard time just for kicks.


Jam RF signals, modify IR cues, etc...


Remote operators solve these issues.

Once you have autonomous cars that drive safely but can't manage complex situations like you describe, you delegate those situations to remote pilots who are allowed to operate the car at slow speeds. You need 5G network coverage with mission-critical features (mcMTC) to achieve that: a BLER of 10^-6 and E2E latency < 5-10 ms. Construction crews might be required to erect a 5G mini cell tower before they can start working, to make sure that traffic flows smoothly.

A taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of workforce reduction provides huge savings.
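As a rough sanity check on those latency numbers, here's a back-of-the-envelope sketch (plain arithmetic, no real network model) of how far a car travels during one end-to-end latency window:

```python
def travel_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Metres a car covers during one end-to-end latency window."""
    speed_ms = speed_kmh / 3.6             # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# Slow remote operation over a 10 ms mission-critical link:
print(travel_during_latency(30, 10))       # ~0.08 m at 30 km/h
# Highway speed over a ~70 ms satellite-style link:
print(travel_during_latency(110, 70))      # ~2.14 m at 110 km/h
```

Which is part of why the proposal restricts remote pilots to slow speeds: at walking pace the car barely moves during one round trip of the link.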


> You need 5G network coverage with mission-critical features (mcMTC) to achieve that: a BLER of 10^-6 and E2E latency < 5-10 ms.

I do wonder if that's a factor behind Musk's push into low-orbit satellite Internet.

> A taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of workforce reduction provides huge savings.

Even if all mileage was human-driven there would be very large benefits if you could really consolidate taxi drivers in call-centres for remote driving. No need to transport or preposition drivers and much less trouble estimating demand.


Starlink can provide 25 to 35 ms latency from low orbits, so I don't think so.


Even 70 ms isn't all that long compared to the official (let alone real-world) thinking-time estimates for braking http://www.brake.org.uk/component/tags/tag/thinking-time , so it might not be an unacceptable delay, at least if combined with professional drivers and reduced speed limits and/or failsafe locally-controlled AI braking.


  fleet of 10,000 vehicles might need only 100-200 remote operators
It won't scale in a predictable way. Let's say there's a major event in NYC (natural disaster or unnatural). You may suddenly need 700 operators at the same time just to deal with NYC and environs.


Let’s say there is an event in NYC. Within milliseconds of the event, all the robot cars can be notified and their human drivers can take over; only the cars immediately in the vicinity of the event need remote operators. It’s not straightforward, but it can be done.


Thus requiring the car to always have a licensed driver in the driver's seat. Basically, exactly what we have now with the safety drivers.


Even if you assume requiring human takeover to be a relatively uncommon event (whatever that means), as soon as you posit it as something that will be needed from time to time, you've significantly constrained the car's usage models. You now must have a licensed, unimpaired driver in the car at all times. Even if they don't have to be paying attention, this means no empty cars, no unaccompanied children, no "driving" home from the night out, etc.


Sure, but maybe we can start there. I would certainly buy a car that could drive itself a significant percentage of the time.


Oh, I would too assuming it were relatively affordable. I'd be pretty happy with one that even just let me doze off when highway driving in a limited set of weather conditions.

I was just pointing out that, if you can't guarantee you won't need to hand off to a physically present driver, then there are a lot of things you can't do with the car, even if needed interventions are just an occasional thing.


Yeah... I used to work for a small start-up that had a product to basically automate a switchboard used for elderly care. It was fun when the need for manual operation suddenly came around. We didn't have the manpower, nor the actual switchboard.


You reduce the number of vehicles available in those rare cases.


If it’s possible for remote drivers to assist a vehicle with no in-car steering apparatus through areas where the algorithm can’t go, then I think it can be done.

Getting to absolute 100% will require either AGI or an incredible infrastructure investment. Now personally I think FSD is worth on the order of $1 trillion per year to the economy, so it’s the next great Moon Shot, and totally worth every bit of infrastructure investment we can throw at it.

But it makes sense to see how much further we can get with in-car algorithmic driving before the infra investments start coming in earnest to fill in the gaps.

Another possibility is there could be ways for a passenger to assist the algorithm without actually using a steering wheel and pedals as input.

I believe the level before truly perfect FSD allows the car to get stuck as long as it does so safely. Approaching and stopping at a single lane construction zone, for instance.

The current Tesla AP does remarkably well on highways with missing lane markings. A stretch I drive every day is ground down in prep for new pavement and just has the occasional white square marking, but it’s enough for AP to lock in on. It also seems to do fine with cones.

It’s worth noting that construction zones aren’t even particularly safe for human drivers (accident rate skyrockets). So technology to make construction zones more passable overall is important, even if it just enables self driving as a side effect.


It's hard to see how we could really count on remote drivers for anything safety critical considering that the current cellular data network is unreliable, lacks guaranteed quality of service, and has many coverage gaps.


Keep in mind that the vehicle already has all the sensors to do collision avoidance and navigation.

The remote control can be to tell the car "drive this path" instead of direct control of the vehicle over a high latency link.
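A hypothetical sketch of that division of labour (all names and fields here are invented for illustration, not any real vehicle API): the operator sends a short path, and the car executes it under its own onboard collision avoidance, so link latency affects planning rather than reaction time:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PathCommand:
    """Hypothetical high-level command from a remote operator."""
    waypoints: List[Tuple[float, float]]  # (x, y) metres in the car's local frame
    max_speed_ms: float                   # speed cap while executing, m/s
    expires_after_s: float                # stop safely if no new command arrives

def is_safe_to_execute(cmd: PathCommand, obstacle_free: bool) -> bool:
    # Local autonomy retains veto power over the remote command:
    # the car refuses if its sensors see an obstacle or the cap is too high.
    return obstacle_free and cmd.max_speed_ms <= 5.0  # e.g. walking-pace cap

cmd = PathCommand(waypoints=[(0, 0), (5, 1), (10, 2)],
                  max_speed_ms=3.0, expires_after_s=2.0)
print(is_safe_to_execute(cmd, obstacle_free=True))   # True
```

The `expires_after_s` field is one way to fail safe on a flaky link: if no fresh command arrives in time, the car simply stops rather than continuing on stale instructions.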


https://boingboing.net/2019/07/06/flickering-car-ghosts.html

Essentially a PoC of something the car can see, but that is so brief a human cannot see anything.



