Waymo got pulled over. What happens next in Phoenix? (azcentral.com)
26 points by intunderflow 4 days ago | 49 comments

"unable to issue citation to computer" is about the stupidest and laziest police response I have ever heard. I am beyond upset about this. It's obvious they could issue a citation and send it to the address on the registration of the car, which is certainly Waymo or a company that could forward it to Waymo.

If my parked car gets a ticket, they put it on the windshield. It's not some stupid response like "unable to issue citation to car with no one in it." Since the window was open, they could also tape it on the dash or put it on the driver's seat.

There is no reason why Waymo should not get a citation and be forced to appear in court to explain themselves, just like everyone else. That is why the courts exist. But lazy cops seem to prevent this from happening.


Most jurisdictions make a distinction between moving violations, which are issued to the driver, and parking violations, which are issued to the car (and its owner, by extension). This is why, in most places, you cannot get points on your license from parking illegally.

This situation would have been a moving violation. It sounds like the law has not caught up with the concept that a company might hold a driver's license and be issued moving violations.


I can see the next generation of vanity license plate hacks:

NODRIVER

NOHUMAN


Just don't get the plate NULL; apparently that gets you a lot of trouble.

https://www.wired.com/story/null-license-plate-landed-one-ha...


The problem is whether the software that drives an entire vehicle line can be held accountable. What you describe would hold the owner accountable, like blaming a Toyota owner for a bug in the cruise control rather than the manufacturer of the software. It's an emerging issue and a grey area for existing laws.

This seems like a strawman that isn't related to the individual case here. For Waymo, the cars and the software come from the same company. Even if they didn't, the car's owner can always blame the manufacturer of the car or of any software running in it. That's why we have judges to figure out who's at fault.

Think about the case of stuck accelerators: the driver could be blamed for hitting something, but they can say in court that they did not have their foot on the gas. This gets checked out and the blame is pushed onto the manufacturer of the car. This is old hat and has a lot of legal precedent; it's why we have recalls for this kind of issue.

While the legal precedent may currently be a grey area, that doesn't mean it's impossible to give a citation to a driverless car. That is the beginning of getting the grey area into court, where a judge can rule on things. Not giving a citation does nothing for anyone; it just lets Waymo escape responsibility.

Not only that, but if Waymo got a citation, the openness of our court system would let everyone see how the case turns out and what arguments the company makes.


Capital-wielding technology is one of the few trump cards against cops' customary demands of authorituh. They can't get very far trying to beat a metal car with a baton, so they're at a loss.

Cops beating a self-driving car with batons will be the new Office Space printer scene. The cops should also let all the air out of the tires and boot the car for good measure.

More like regular citizens, once the self-driving bubble kicks off in earnest and aimless investment buys as many cars as it did scooters, continually circling rather than merely blocking sidewalks.

I'm fairly skeptical of driverless cars, but it's hard to say what the deal is here; I see plenty of humans end up in the wrong lane around construction zones, too. Obviously we want these things to be better than humans, but it'd be nice to know whether this was just the sort of mistake we see humans commonly make here, or whether it was egregious.

Dashcam footage of what the car actually did would be pretty helpful in understanding what actually happened.


> but it's hard to say what the deal is here

An autonomous vehicle drove into oncoming traffic. That's not 'failing safe'.


I didn't imply that it was, so I'm honestly unsure what your point is. I already stated I'm skeptical of these things, but even as a skeptic I don't believe it's reasonable to expect perfection out of them, especially early in development. That's why we do these things in a limited fashion to begin with.

Humans get confused by construction signage all the time. I'm curious if this was a mistake that would be understandable if a human made it, or if it was an obviously incorrect choice. "Can make the same mistake a human would" isn't ideal, but it's better than "Makes mistakes a human wouldn't."


Unless I missed something, the article never makes clear whether Waymo was eventually issued a citation. But I also couldn't watch the full video without being bombarded by ads and audio that started running headless in the background.

No, it didn't get a citation.

When humans break traffic laws, it's often because they decide to break the law or do not take appropriate care to ensure that the law is followed. As a result, the process focuses on personal responsibility and accountability for the driver's actions. That framework doesn't map onto self-driving vehicles; we probably shouldn't be trying to force it to fit. Treating a self-driving vehicle that violates traffic laws as a defective product is a better way to frame the issue.

I'd compare it to a situation where a car runs a red light: it could be because the driver wasn't paying attention, or because the brakes failed and the car couldn't stop. The brakes could have failed because they weren't properly maintained or because of a flaw in the design. Who is responsible, and what appropriate accountability looks like, depends on the details. If the driver wasn't paying attention, they should get a citation for violating traffic laws. The operator (the registered owner) should get a citation if the brakes weren't reasonably maintained. If there is a design flaw, a regulatory agency should address the issue with the manufacturer to ensure that the problem is corrected. If the manufacturer was reckless in its design or manufacturing practices, it and/or its employees should be held accountable for those choices.


Reader mode for the win. The article mentions that the traffic cop was unable to cite the robot.

So basically, if you make a mistake while driving, quickly switch to the passenger seat and let your computer-assisted imaginary friend take the blame, since it can't get a citation or fine? Good to know!

I wouldn't be so quick to do that. A potentially defective self-driving vehicle is a more serious matter since it could affect many cars and will likely attract much more scrutiny and further investigation… which, in this scenario, would quickly reveal that the vehicle was under manual control at the time of the incident. Now, you have a traffic ticket and a criminal obstruction of justice charge.

Sure, as long as you happen to be driving your $200,000 Jaguar loaded with sensors that day.

"UNABLE TO ISSUE CITATION TO COMPUTER" is going to be one of my go-to phrases henceforth.

Tickets are supposed to be a deterrent to violating traffic laws. For a corporation, though, even a big ticket is just a drop in the bucket. Laws will need to be updated if these things are allowed on the streets.

One of the ways that tickets act as a deterrent is that if you get enough of them, your driver's license gets revoked. Seems to me it would be a pretty big deal for the corporation if its "driver" could no longer legally drive any vehicle!

The fundamental problem of AI cars: they have a set of rules rather than common sense. They will fare far worse than humans when given impossible directions.

"Common sense" is a set of rules. I see a lot of drivers doing stupid things and I will confess that I am one of them sometimes so it's not like 'common sense" is some kind of magical human ability that we are born with. It's set of rules that we automatically acquire when driving long enough (or not if we are not observant enough) and I believe AI cars can learn that as well.

I often feel that common sense violates the rules. For instance, driving over a double yellow to give more space to something in the road. Maybe it is a set of rules, some of which exist outside of common law.

"Driving over a double yellow to give more space to something inside of the road when there is no other car approaching from opposite direction and visibility is good" is still a rule that the AI-powered car can use.

Exactly. The car sees the rule "don't cross the double yellow". The human sees the more general "double yellow means there is insufficient visibility for a pass". You must not put yourself in a position of having to be across it, but that doesn't mean you can't be nice to a bike in the bike lane if visibility and traffic permit.

We can add an exception for the car: "if there is no other car approaching and you have to overtake a bike, then you can cross the double yellow, but you need to reduce speed." Computers are fast enough to analyze thousands of rules and exceptions every second and choose the best option.
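To make that concrete, here's a toy sketch in Python of what such a rule-plus-exception check might look like. It's purely illustrative: the Scene attributes and function name are hypothetical, not anything from a real driving stack.

    from dataclasses import dataclass

    @dataclass
    class Scene:
        oncoming_traffic: bool      # another car approaching in the opposite lane?
        overtaking_cyclist: bool    # passing a bike that needs extra room?
        visibility_good: bool       # can we see far enough ahead to judge the pass?

    def may_cross_double_yellow(scene: Scene) -> bool:
        # Base rule: never cross the double yellow.
        # Exception: briefly cross to give a cyclist room, but only when
        # the oncoming lane is clear and visibility permits (the caller
        # would also be expected to reduce speed while doing so).
        return (scene.overtaking_cyclist
                and not scene.oncoming_traffic
                and scene.visibility_good)

    # Clear oncoming lane, good visibility: the exception applies.
    print(may_cross_double_yellow(Scene(False, True, True)))   # True
    # Oncoming traffic: fall back to the base rule.
    print(may_cross_double_yellow(Scene(True, True, True)))    # False

The hard part, of course, isn't evaluating such rules but deciding which ones apply and filling in the conditions from sensor data.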

Yeah, but you can’t manually encode every rule of common sense — especially if the rule is more analog than discrete.

Common sense is having a true understanding of the world, not just a bunch of pixel patterns.

I don't think there is such a thing as a "true understanding of the world". Every driver has their own understanding of the (road) world. Some are careful, some are aggressive. It depends on personal experience.

Human drivers do this sort of thing all the time.

In fact, literally today, I drove into the oncoming lane of traffic due to construction work. There wasn't signage or anything telling me to do it; I just saw the car in front of me doing it and did it too. No idea whether they did it for any sensible reason.


I'm recalling an incident from probably 30 years ago. Pouring rain; I can see that up ahead everyone is sharply veering right, but I can't see what is prompting that behavior. I know *something's* up, but not what.

I get to the spot where everyone veers right and see a double yellow. On my *right*. We were going straight, but the road zigged (a 45 mph road; at the time it was only being widened from two lanes to four as the adjoining property was developed, so while the road was "straight" it zigged back and forth because of the patchwork development), and the rain was so hard we couldn't see the lane markings.


Yeah, human drivers make mistakes all the time, and we are fine with that. We allow teenagers and elderly people to drive, so why can't we accept that AI-steered cars will sometimes cause an accident?

Because teenagers and the elderly can stand trial for vehicular manslaughter. If a Waymo drove into oncoming traffic and killed someone, who would be responsible? The CEO of Waymo, who knew this could happen? If it happens multiple times, what is the value of a human life? Even if Waymo becomes a safer driver than a human, we need to answer these questions in regulation and common law.

For me, it is insane how many people need to die before anyone holds Tesla accountable for its Autopilot [1].

https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...


Criminal charges are appropriate in situations where a human (or group of humans) acts with malicious or reckless intent that harms (or could harm) the community. Self-driving vehicles reduce (but do not eliminate) opportunities for malicious or reckless acts related to vehicles. Criminal charges would remain appropriate in situations where an operator or manufacturer (and/or their employees) are malicious or reckless concerning a vehicle's design, manufacturing, or operation. It's important to note that just because someone was injured or died doesn't mean that there are malicious or reckless acts involved.

This isn't a novel issue: the courts and legal system regularly deal with defective products, including vehicles, that cause injury or death. If a defective product causes damage, the injured party is typically compensated financially for their loss. There are even frameworks for determining the value of life in monetary terms: https://en.wikipedia.org/wiki/Value_of_life


The future for self-driving cars is closed roads where only driverless cars are allowed.

The ethical issues when a driverless car kills someone cannot be solved in a reasonable way.

The question is: which city or country will be the first to implement roads for driverless cars only?


> The future for self-driving cars is closed roads where only driverless cars are allowed.

Given that human-driven cars, trucks, and cyclists are already on the roads and will be for quite a while to come, and pedestrians already cross them, you would have to build a whole new road network, with crossing points for human-driven vehicles and pedestrian traffic. That is simply infeasible, both in terms of money and in terms of space, especially in built-up areas where the space is already fully utilised.


Quite right; it is infeasible to close roads to all but self-driving cars.

Which raises the question: how are self-driving cars ever going to become mainstream?


> The future for self-driving cars is closed roads where only driverless cars are allowed.

This is the present in Honolulu, as of last year: a fleet of self-driving cars, operating on a closed road where only driverless cars are allowed, transports over 3,000 people per day. The service is called "Skyline".


I looked it up, and it seems to just be a typical driverless train, more or less the same as those found in airports for moving between terminals, except longer. It might be a good service, but it's not quite as exciting as your description.

https://en.wikipedia.org/wiki/Skyline_(Honolulu)


Well, yes: it was a tongue-in-cheek way to observe that requiring closed roads for self-driving vehicles would render the whole exercise pointless. We already have a more efficient solution for that problem, with no need for any exciting new technology!

On a larger scale: Copenhagen and Vancouver both have fully-automated metro systems (i.e. driverless systems). Presumably there are many other cities with such systems around the world, and they probably all work nicely.

Fine for getting around different areas of those cities, but it's not going to drive you wherever you want to go.


> They will fare far worse than humans when given impossible directions.

That's why you keep roads regulated, clean and orderly.


But that's not always the reality, especially around construction zones.

Humans don't always get it right, and things sometimes take out the temporary markings even when they were placed correctly.

I've had to evade one of those big barrel cones rolling across the freeway. I suspect it fell off a truck rather than blew over from road work, but there's no way to know. Would an AI car know that a barrel in front of it is a windblown hazard rather than an actual indication of a closed lane?

And I've clobbered one of the ordinary cones when a guy, confused by the construction zone, went straight from the left-turn lane and I had to evade. I didn't see where the cone went and didn't think of trying to put it back. Again, is the AI going to be able to realize that smacking the cone was the right thing to do? (The cone was blocking off road in preparation for work ahead; I didn't actually drive into the roadwork.)

Humans are far better than AIs at improvising when something has gone wrong.

As for AI: we put in a driverless bus on a short loop here, and it was in an accident within hours. Any human driver would have understood to get out of the way of the truck, or at least lay on the horn if that wasn't possible; the bus just sat there and let itself be hit by the backing truck. If everything is as it should be, they probably already are better drivers than humans. They just aren't as good when confronted with things that are wrong.


A lot of human drivers could do with a bit more adherence to the rules. Maybe if you take the wrong off-ramp, you continue down it and loop around instead of reversing back onto the highway, etc.

I think in their current state, AI cars are a bit of a menace, but presumably they will improve over time. Meanwhile, it's evident we've reached a local maximum for humans, possibly even a decline, as everyone is in ever more of a hurry: maybe let me just bend the rules a little this one time, it'll be okay.


How is the in-car audio call between Waymo and the police initiated? I would want it to initiate automatically shortly after the car is pulled over; that would be easiest for emergency services. It would also reassure the passenger and provide a way to give the passenger instructions if needed. The downside would be the cost to Waymo of staffing a potentially unnecessary call.

There's a video in there; it plays after the first one finishes.

The car stops, the windows open, and it gets connected to someone who is either a bot or really sounds like one ("I appreciate" after each sentence is just so creepy).


Ah, thanks. Yeah, that customer service feels... Googly. A human with a warmer and more varied tone would better reassure passengers, and would probably build a little more goodwill with cities.

Can't wait for the first felony/high-risk stop executed against a Waymo. It should lead to some good body-cam footage (and hopefully nobody gets shot).


