Hacker News
A Waymo One Rider’s Experiences Highlight Autonomous Rideshare’s Shortcomings (futurism.com)
81 points by motiw 19 days ago | 99 comments

Carl wants to live in a world where traffic is safe and no accidents happen, but he also wants an AI that can drive in a human-like, aggressive manner. Carl's expectations are a bit too high, I'd say...

For my part, I'd just want the results to be very predictable. If I know that a drive from A to B will always take X minutes +/- 5%, I can just use that as part of my day doing whatever while the car drives itself.

That's simply not going to happen until essentially all other cars on the road are self-driving cars.

And when the weather is perfect and the snowplows always clear the roads on time, etc.

There are too many variables. It's incredibly difficult to get consistent on-time performance out of a train system which has far fewer variables to anticipate. It will never happen for cars even if they are self-driving.

Unless they also have their own lane/road. Maybe in built up areas we will have to build tunnels for them.

I'm sure language policing will reduce fatalities to zero real soon now.

It’s mind blowing that even skilled software engineers were fooled into thinking that self driving cars were only months away. The hype was blinding.

The amount of complexity involved in a self driving car is unfathomable. They can’t even get the thing to work on simple trips in mega-sprawl Phoenix with perfect weather and solid lane markings. We are decades out, and maybe only if they ban human drivers.

> It’s mind blowing that even skilled software engineers were fooled into thinking that self driving cars were only months away. The hype was blinding.

Waymo was giving out multi-million-dollar bonuses. You too would learn to believe whatever you're required to believe to get those bonuses.

The hype is because not very long ago nobody even expected that what they’re doing now would be possible. That they can even do what they’re doing now is an amazing achievement.

It turns out that it’s not enough and we need another breakthrough, but it might not have been possible to see what the limitations of the new approach were going to be when they were doing new, previously impossible things every week.

I think this is similar to the belief in the 20th century that right after the moon, we’d be colonizing mars and exploring the galaxy. We had been bound to the surface of the earth for thousands of years of civilization and then suddenly it seemed like we had no limitations. You can probably excuse a little bit of excess enthusiasm.

> I think this is similar to the belief in the 20th century that right after the moon, we’d be colonizing mars and exploring the galaxy.

The takeaway from the history of 20th-century technology is that a given field plateaus much more quickly than the optimists expect, sometimes short of what's needed to be truly useful. Something as simple as voice recognition has been an area of intense research for 30-40 years, and it's still not useful for anything but the simplest tasks. I have 10 Siri reminders from the last couple of days; 4 are so badly mangled I can't figure out what I meant to remind myself. Barring some fundamental advance, we're not going to be talking to our computers as a primary input method.

The same goes for other technologies. We broke the sound barrier but never developed the tech to make supersonic travel cost-effective. We put a man on the moon but have yet to develop the tech to colonize off-world bodies. Transistor speed and density have plateaued. After years of doubling CPU clock speeds, Intel crawled from 3 GHz to 5 GHz over 15 years. (If CPU speeds had improved that slowly back in the day, we'd be using 200 MHz Pentium Pros today.)

Likewise flying cars. They have existed for several years; I saw one fly in 2013. But they cost at least $250,000, or (for newer electric models) they have ridiculously short runtimes. These drawbacks weren't part of the plan in bubbly 1960s predictions, and they're not going to be quickly overcome.

In the 1970s we saw the rise of "safety first" as a philosophy for life. By the 1990s most people believed it. Safety first as a prime directive leads, in the end, to doing nothing. Almost none of the previous tech advancements, from controlling fire a million years ago to nuclear weapons, could have happened under "safety first". I can't think of any new invention since the 1970s (except recreational ones like wingsuits, and some medical devices and drugs) that can kill people when used. A huge number of things we use heavily would never be invented today: steam engines, cars, trains, electricity, natural gas in homes, etc. Millions died as these inventions were improved, and they still kill millions today. Inventors of new things were only allowed to put "use at your own risk" on software, so that is where all the new stuff has happened in the last 50 years. That era is now ending as cyberwarfare, ransomware, and "fake news" become serious problems and super AGI is popularized as a possibility.

Sure, S-curves exist for all technologies, but now the only tech allowed to even start is tech thought to be powerless (i.e., next to useless). We wanted flying cars but got 140 characters.

This is a pattern that AI demonstrates with every cycle, from the 1960s, 80s, and continuing now.

Some breakthrough solves 20% of a problem, and 40% is easy to get to. 60% takes exponentially more effort. It never gets to 80% and AI collapses into a winter. 100%, it seems, requires general intelligence and that is always receding into the future.

At some point, the technology is adopted as-is, and we learn to work around the problems.

It may not have been easy to see that another breakthrough is required, but it was very easy to see that most predictions were laughable.

The CEO of Waymo announced[1] in May 2018:

> Phoenix will be the first stop for Waymo's driverless transportation service, which is launching later this year. Soon, everyone will be able to call Waymo, using our app, and a fully self-driving car will pull up--with no one in the driver's seat-- to whisk them away to their destination. And that's just the beginning!

[1] https://youtu.be/ogfYd705cRs?t=5795

Similar, in that anyone with an understanding of science would not believe those things?

Decades out? Here's a real world report. https://www.reddit.com/r/SelfDrivingCars/comments/ab2nqe/way...

What part of that thread/video series are you calling attention to?

> They can’t even get the thing to work on simple trips

Clearly it does work for simple trips.

Trips are only simple in retrospect.

To "work", as self-driving car has to be able to deal with all contingencies, such as unexpected heavy traffic, pedestrians, kids, drunk college students, sudden rain, sudden fog, constructions, police redirecting traffic, etc.

Oh, and it also shouldn't kill anyone more than once in 100 million miles or so. Even if some trips turn out to be slightly less simple than predicted.

You edited your comment without any mention of it. This is considered bad etiquette and is derailing to the conversation. Please try not to do that in the future.

dominotw 19 days ago [flagged]

It can be, if we have the correct infrastructure. If we highlighted the thousands of people dying every year in DUIs, accidents, etc. the same way the media highlights mass shootings, we could mobilize people to build out infrastructure that would save thousands of lives. But of course the NYTimes and co. don't care about that, because it's not left-vs-right clickbait.

Having lived in Phoenix for several years and worked in Chandler (the headquarters of the self-driving car division; my office was actually on the initial training route before they allowed passengers), I can say the cars suffer from being overly conservative. I don't think this is a bad thing for self-driving cars. By contrast, Uber's cars (before they were banned from the state for disabling collision detection and killing someone) drove aggressively: they needed to be taken over to stop them from hitting pedestrians crossing at intersections, and they continually drove at least 5 mph above the speed limit.

In Phoenix everyone drives fast on the highway, but Waymo cars drive 5-10 mph under the speed limit. There are some unprotected left turns across four lanes that they may attempt, but being conservative they crawl across and eventually get stuck in the middle. This is probably a good thing for now, and as the engineering team's confidence in the hardware and software goes up, it can likely be tuned.

I think a lot of people in this thread are underestimating the possibility of safety drivers going remote and how that could be a solution to these edge cases.

When making decisions or coming across strange situations the AI has a confidence level, and when that falls below a threshold it can notify a remote person to potentially handle the situation. Right now Waymo has problems merging on a freeway with asshole drivers... but what's the big deal? It's maybe 1 minute out of a 30 minute drive.

If you only need a human operator to take over the vehicle 2 minutes out of every hour of autonomous travel, then you could probably get by with 10 remote operators monitoring a fleet of 200 vehicles. That could translate into way better profit margins than any current ride-share service.
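That staffing estimate can be sanity-checked with a quick sketch. The 2-minutes-per-hour intervention rate and the ~2/3 operator utilization here are illustrative assumptions from the comment, not real Waymo numbers:

```python
import math

def operators_needed(fleet_size, intervention_min_per_hour, utilization=0.67):
    """Rough headcount of remote operators for a fleet.

    utilization is the fraction of each operator-hour actually spent
    handling vehicles; the slack absorbs overlapping requests. (A real
    system would size this with queueing theory, not a flat factor.)
    """
    demand = fleet_size * intervention_min_per_hour   # operator-minutes/hour
    capacity_per_operator = 60 * utilization          # usable minutes/hour
    return math.ceil(demand / capacity_per_operator)

# 200 vehicles x 2 min/hour = 400 operator-minutes of demand per hour;
# at ~40 usable minutes per operator-hour, that's 10 operators.
print(operators_needed(200, 2))  # 10
```

The flat utilization factor is the weakest assumption: interventions arrive in bursts, which is exactly the "11th human" problem raised below.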

Now I'm imagining a guy whose job is to sit all day in a simulator vehicle, 'teleporting' from one stranded Waymo to the next and taking it around school buses and manhole covers, parallel parking, etc.

I kind of think that is viable. Replace thirty regular drivers with one. That kind of model doesn't look too far off.

We already do it with military UAVs. Assist-by-wire for vehicles seems like it's not too far-fetched.

When you hit a window where you need an 11th human, who isn't online, what then? Does the vehicle facing that need just fail?

On a side note, I'd like to lament the dystopia of reading articles online, where every 4 paragraphs your attention is hijacked by a video ad. Fuck that. Fuck futurism.com.

Blame the fact that people will pay for very few media sites. Realistically, Futurism's choice is to have ad-supported content or not exist.

But is that even an option? I used to pay for NYT and WSJ (not cheap) and was still bombarded with ads.

Or less blogspam. The explosion of mostly-blind news regurgitation isn't a good thing, and it's largely chasing ad dollars because nobody would voluntarily pay for it.

They do have a third option: unobtrusive ads.

Those are worth very little when untargeted.

The NYT is a different story. There's a critical mass after which it becomes worthwhile to just target your audience. I doubt these guys are at that critical mass.

Don't believe me, though. Talk to ad execs (spending money on this) or ad ops (using the money on this) and see what they think.

In fact, since you won't have the GDPR hassles if you go untargeted, you should consider following this thesis to its logical conclusion: an untargeted exchange. With CCPA and GDPR you could push the targeted guys out of business, and that's a many-billion-dollar market. If you're convinced of your thesis, you're on the cusp of mega money.

Good luck.

I use the uMatrix plugin and have it configured to block everything, except of course for the base HTML. For more and more sites this means I get no content without adding in exceptions, but for this and many sites I get the part of the content I want, the text. Then I hit the Reader mode button and I get text which is even in the format that is most comfortable for reading for me.

This kind of dystopia existed in the printing press for centuries.

If that's the entire list of shortcomings, it sounds like they're in pretty good shape. There are audiences for which this car is essential, and hopefully drivers will adapt to robocars, just like horse carts adapted to autos.

I’m not sure. So long as a safety driver is needed there’s no cost savings over regular taxi service. Given the human infrastructure that will be required behind the scenes to support the fragility of these cars’ operation (sensor cleaning, monitoring for problems, etc) and the added expense of the self driving tech, I’m not seeing much opening for money to be made here ever...

You may have missed that tiny detail, but they have not one but two safety drivers in each car. That is twice the number of drivers a normal human-driven car has.

They are not supposed to talk to passengers, though, for the true driverless experience.

The other one isn't a safety driver.

And having the safety driver doesn't tell you all that much about how it would perform without.

> The other one isn't a safety driver.

True, the second person is the co-driver. As Waymo has discovered[1], a single person is not safe enough.

> And having the safety driver doesn't tell you all that much about how it would perform without.

Theoretically true. But in reality, it tells you all you need to know - the cars are not safe without safety drivers.

[1] https://www.theinformation.com/articles/troubled-waymo-worke...

> But in reality, it tells you all you need to know - the cars are not safe without safety drivers.

I disagree. There are other plausible motivations to have a safety driver at this stage even for a safe car.

Of course there are reasons to use a safety driver for some trips.

The point is that they won't do even a single trip without a safety driver. They've done some PR stunts - but they won't give a single real demo to a single real journalist without a safety driver.

Even with a safety driver, a few weeks before they "launched" a "commercial" "self-driving" service, they wouldn't let journalists pick their own destination.

Yeah, overall it looks pretty bad.

I like to think of autonomous vehicles like trains rather than cars in the general sense. They can operate on pre-existing tracks that were previously unexploited. A lot of track exists out there, and as the technology gets better, they can utilize a higher percentage of what's available.

This requires cultural change: living more densely, and being willing to share our commute with other people.

Cultural change is... often rather harder than technological innovation.

If anything, autonomous cars should enable us to live _less_ densely. If you're in a dense area already, you probably don't need a car at all.

I was responding to a comment about using trains.

Yes, autonomous cars would make spreading out easier.

I like these pieces for tempering the hype around driverless cars.

But it's also true it is still early and the technology will improve.

I personally don't believe it will become as reliable as people think. The last percent are always the hardest — but to make a driverless car better than a human, they're crucial.

But who cares what I (or others) think? We shall find out the truth of the matter soon enough. I just hope whoever mispredicted will take a note of it and keep that in mind the next time they make a prediction.

> The last percent are always the hardest — but to make a driverless car better than a human, they're crucial.

The last % being crucial is debatable. A self-driving car could be way more reliable than a human driver at all times except during blizzards in which case it's known to crash.

That car would be extremely reliable, viable and useful to operate everywhere there's never any blizzard, and even when there happens to be a few blizzard events per year it would only need to be automagically rendered inoperable during those times. The town hall could send that kind of command, just like it would ring a civil defense siren.

It's interesting to consider how much money is required to validate this idea: there are multiple companies, many investors, and focused engineers trying to solve this problem. Are those resources emotionally driven toward a short-term outcome that is proving to be far away? (How far, and can this push result in a non-linear jump somewhere?) What is the real hypothesis here that bridges the idea and the creation of value?

The market is so huge that it's worth it even if it takes decades of R&D costing billions a year. Some numbers:

~100M cars are made a year

If 10% of those are self driving, and if they have $1,000 in gross margin due to self driving features, that's $10B a year to cover past R&D. Over 20 years this adds up to $200B. Optimistically, if the tech eventually reaches 80% of new cars and $3,000 margin, then we're talking $4,800 BILLION in profit over 20 years to cover R&D. Waymo is probably spending like $1-$2B/yr right now. Their engineering headcount is not huge, it's like under 1,000 based on a LinkedIn search I did last year.
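The back-of-the-envelope math above can be written out explicitly. All the inputs are the commenter's assumptions, not actual industry figures:

```python
def sd_margin_total(cars_per_year, sd_share, margin_per_car, years):
    """Cumulative gross margin from self-driving features over a horizon."""
    return cars_per_year * sd_share * margin_per_car * years

CARS_PER_YEAR = 100_000_000  # ~100M cars made per year

# Conservative: 10% adoption, $1,000 margin per car, over 20 years.
conservative = sd_margin_total(CARS_PER_YEAR, 0.10, 1_000, 20)
# Optimistic: 80% adoption, $3,000 margin per car, over 20 years.
optimistic = sd_margin_total(CARS_PER_YEAR, 0.80, 3_000, 20)

print(f"conservative: ${conservative / 1e9:,.0f}B")  # conservative: $200B
print(f"optimistic:   ${optimistic / 1e9:,.0f}B")    # optimistic:   $4,800B
```

Even the conservative case covers a decade of Waymo-scale R&D spend, which is the point being made.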

Really the potential market is so huge that it can justify many many billions spent on R&D. I am relatively bearish on the technology, and doubt it will be commercialized in 10 years, but I still think the investment is justified.

Apologies for the poor writing - typed from my phone. I have more thoughts here: http://www.tedsanders.com/on-self-driving-cars/

> The market is so huge that it's worth it even if it takes decades of R&D costing billions a year.

This is a common trap novice founders fall into. They see that a market is huge, and simply say "well if we can just get a piece of the pie we'll be great!" It's like saying you only need to take 1% of search from Google. Yes, that is a huge opportunity, but how are you going to do it besides hopes and dreams?

If ~$10 billion invested turned into actual self-driving technology that could be generalized, it would almost certainly be worth it (though if competitors also develop it, you essentially have a commodity and are now in the airline industry, where there's a huge moat but the margins are still awful). But people are becoming more cognizant that maybe the technology just isn't ready yet, and nobody knows when the next breakthrough will occur. Is it possible one of the competing companies already has it and just hasn't shared it? Technically... but I am very confident that right now the state of the art just isn't good enough, and the people in charge are naively throwing money at the problem hoping it bears fruit.

I realize the person I am responding to largely shares my opinions on self-driving cars not being ready for prime time yet (I'm familiar with his website), I just wanted to point out a common fallacy that leads people to work on fruitless endeavors. In my opinion an AI winter is coming before self-driving cars are "real" - by which I mean a profitable solution that can be scaled.

I think the main point, as this article alludes to, is that we may end up with safe and actually fully autonomous vehicles, but they're bloody annoying to ride in. At that point nobody will be willing to pay that $1000 premium, and you'll never recoup costs.

Personally I think given the prior that we can achieve self-driving tech, there's a ~75% probability people won't want that tech.

We've seen it so many times before, companies spending $$$ on R&D of some cool new thing and people just go "meh." I don't see a compelling argument for why this won't just join the ranks of 3D-TVs, Google Glass and flying cars.

> I think the main point, as this article alludes to, is that we may end up with safe and actually fully autonomous vehicles, but they're bloody annoying to ride in. At that point nobody will be willing to pay that $1000 premium, and you'll never recoup costs.

I think you underestimate what a difference riding in back makes.

Yes, yes, when you drive, you get angry and aggressive. I do too; my observation from taking rideshare (I gave away my car and now almost exclusively use rideshare) is that almost everyone rages out from time to time when they drive.

Sitting in back with a book is such a different experience. "Wow, I used to be like that?" I mean, it seems so weird to see someone get mad about traffic when you haven't taken the wheel for a month yourself.


To be sure, if we don't get full level-4; if you still need to be the 'safety driver' and actually pay attention, that's a different thing, and yeah, that's a whole lot less useful.

And to be clear, I am not saying that I think we are close to full level-4. I'm just saying that if we get there? it will be wonderful in ways that weren't obvious to me until I switched from driving to being driven for quite some time.

The first generations of autonomous vehicles will almost certainly still have steering wheels, so if it's too timid at a turn/merge, you could override it like the safety driver in this anecdote did. There is immense value in being able to watch a movie or sleep, and trust that the worst thing that will happen is that the car will wait too long for an opening.

There are a few interesting things about self-driving vehicles. One surprising thing is that, for a very large autonomous taxi company, these vehicles could in theory operate at a profit while charging less per mile than it costs you to drive your own car. Economy of scale plus vertical integration means maintenance, insurance, and other regular costs are going to be substantially cheaper, mile for mile, than consumer rates. And just consider the amount of money in play here: in 2016, US drivers (counting only cars, trucks, minivans, and SUVs) drove 3.22 trillion miles. Imagine you get a market share of 10% of those miles, with a 'negligible' profit margin of 5 cents per mile. That's $16.1 billion in profit.
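The arithmetic behind that last figure is easy to check. The 2016 mileage number is the one cited in the comment; the market share and per-mile margin are the commenter's assumptions:

```python
US_LIGHT_VEHICLE_MILES_2016 = 3.22e12  # miles driven in 2016, per the comment
MARKET_SHARE = 0.10                    # assumed share of miles captured
PROFIT_PER_MILE = 0.05                 # assumed "negligible" margin, in dollars

annual_profit = US_LIGHT_VEHICLE_MILES_2016 * MARKET_SHARE * PROFIT_PER_MILE
print(f"${annual_profit / 1e9:.1f}B per year")  # $16.1B per year
```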

The potential there is quite remarkable. But it gets even more tempting because of some properties of self-driving vehicles. Because of the above, the biggest company will not only be able to operate far more cheaply than anybody else, but it will also be able to offer a better service. It can have more regular cleaning and maintenance, and, most importantly, you'll be able to get a ride more reliably from it than from any other player, and in the widest array of locations.

And consider the lobbying that will inevitably happen. The instant that self driving vehicles can show a higher average safety rating, companies such as Google/Waymo will lobby governments to start banning non-autonomous vehicles in urban areas, "for consumer safety." Pair all of these facts with rapidly declining vehicle ownership among the younger generations. Things could end up changing incredibly rapidly. And whoever ends up on top is going to control what might end up becoming the most profitable business to ever exist. We talked about 10% at a $0.05/mile profit margin. That meager $16 billion is an extremely conservative estimate for a single country.


And then you get into meta-aspects. Getting from A to B is one of the most important parts of basic modern living. It's hard to even imagine the implications if a single company were able to gain some significant degree of control over this aspect of life. And then there's the data. Not only would you know where huge chunks of the population are going to and fro at all times, but you'd also have a massive real-time surveillance network of practically all of 'real life' society, as each and every vehicle you control would need to be constantly, autonomously surveilling everything around it. The possibilities there are mostly dystopic, but that sort of knowledge and power is going to be a major driver here.

I'm only about halfway through, but what are the chances 'Carl' doesn't get identified through this story? It seems like it'd be very easy for them to cross-check this story with ride history to identify the person, especially since it sounds like a closed beta kind of thing.

What kind of consequences will 'Carl' face for speaking to the press about this? Do you just get dropped from the program, or are there real consequences? (It doesn't sound like 'Carl' is a Google employee, at least.)

I'm pretty sure I know who Carl is. I don't think he's concerned about his identity as much as the article has been written to make you believe. I think the author was trying to make it seem more like an inside scoop when it isn't.

Off topic: Is it just me or is the font on this site completely unreadable? I had to copy and paste it into a text file so I could actually read what was written. It's like medium but 80x worse.

Agreed. In Firefox I had to hit Ctrl and + 3-4 times to enlarge the text enough to make it readable.

It's pretty painful for me, too. I want to say the varying stroke thickness within a character is the reason; it often becomes too thin, especially for "A" and "e".

On desktop (mac w chrome) it looks fine. Maybe it rendered weird for you?

> it would seem Waymo programmed its autonomous vehicles to treat turn signals as an indicator that the car is about to execute a turn or merge, not an indicator that it is looking for the opportunity to do so.

Well, here's one, if not the, issue... I don't know the precise wording of the traffic laws in each state, but in many if not all of them, turn signals should _not_ be used as "an indicator of opportunity".

Now, it is already difficult to build a law abiding autonomous car; expecting the car to also break the law in day-to-day cases (not just an emergency) will be downright impossible -- and I'm not even considering the media storm that would be created by a company saying "we program our cars to break the law all the time"...

> turn signals should _not_ be used as "an indicator of opportunity"

Are you certain about that?

When I learned to drive, we were told to indicate a few seconds before turning, changing lanes, etc. This communicates your intentions to other drivers. In other words, the blinker says, “I’d like to get in the left lane” or “I’m getting off the highway”, not “This car is currently moving leftward.” Otherwise, why not just connect the turn signals directly to the steering wheel?

Signaling before turning is the law. E.g., Maryland Code: "(d) Where signals to be given.- When required, a signal of intention to turn right or left shall be given continuously during at least the last 100 feet traveled by the vehicle before turning; except that a bicyclist may interrupt the turning signal to maintain control of the bicycle."

There are actually many states where using the turn signals to change lanes isn't even legally required though, so things are more mixed when it comes to changing lanes vs. making turns.

> There are actually many states where using the turn signals to change lanes isn't even legally required though, so things are more mixed when it comes to changing lanes vs. making turns.

Arizona is one state where it is required judging by a warning I got from a friendly highway patrol officer who chased me ~5 miles while I was having fun on the old ninja during morning rush hour.

Yes, of course, but it is a _signal_, not a request. You should _not_ put your signal on if there is no room, just to "ask" other drivers to clear the other lane.

Suppose you're in the center lane during some stop-and-go traffic. You want to take the next exit. Do you really just drive along in quiet desperation, hoping for a gap?

I—and everyone I know—would leave their blinker as they creep forward, "requesting" that someone leave a gap big enough to take the exit.

Of course, the vast majority of drivers would do so (including me). That would just be incredibly inefficient to do otherwise. Nevertheless, in my home town, it is against the law (Ontario btw, so I confess I cannot speak for the US).

Same thing goes for the left turn on a yellow light. If there is gapless traffic coming from the other direction, you will probably just creep into the intersection and turn at the end of the yellow, when the other cars have stopped. Nevertheless, by doing so, you are doing two illegal things: 1) going past the stop line and 2) proceeding on a yellow (or even a red) when you could have stopped (easily, since you were already stopped). Is it a bad thing that people (again, including me) do this? Probably not; doing otherwise would be a huge loss of time for everyone, and it does not really increase the accident rate. Yet it is against the law.

Here's the text of Ontario's Highway Traffic Act: https://www.ontario.ca/laws/statute/90h08#BK232

“The driver or operator of a vehicle upon a highway, before turning to the left or right at any intersection or into a private road or driveway or from one lane for traffic to another lane for traffic or to leave the roadway, shall first see that the movement can be made in safety, and, if the operation of any other vehicle may be affected by the movement, SHALL GIVE A SIGNAL PLAINLY VISIBLE TO THE DRIVER OR OPERATOR OF THE OTHER VEHICLE OF THE INTENTION TO MAKE THE MOVEMENT."

I guess one reading of this suggests that you must first check if it would be safe to make a maneuver, and if so, then and only then, must you use a turn signal. I think you could also read this as requiring safety checks and indicating (in either order) before turning, but I'm not a lawyer. I can't imagine that people are actually prosecuted for signaling out-of-order though.

The driving handbook, however, tells you to check your mirrors, then "Signal that you want to move left or right." (here: https://www.ontario.ca/document/official-mto-drivers-handboo...) which seems to suggest some level of intention-indicating is okay.

>If there is a gapless traffic coming from the other direction, you will probably just creep in the intersection and turn at the end of the yellow, when the other cars have stopped.

While this is illegal in some states, it is not in others. Furthermore, the general rule on reds (again, this might vary by state) is not that you should not be in the intersection, but that you cannot enter the intersection. If you are in the intersection, when it turns red, you clear the intersection.

Isn't that exactly the discussion?

It is against the highway code to do that, so can't be programmed in, as this would be ultimate premeditation of breaking the law.

On the other hand, in the real world, this is how people sometimes use turn signals, so autonomous cars are at a practical disadvantage and can't merge when a human driver could.

For large sections of many highways at rush hour, there is essentially never a space to comfortably merge into. You essentially have to make a space to merge into, whether because someone sees you signal and lets you in, or because someone isn't really paying attention and has let a gap open up.

Really? My understanding was that the signal is a request to the vehicles behind you in that lane to yield and create space to let you in. That's how I always see it working here in Vancouver and it's a good system. People are very good about creating space in front of them when someone needs to be let in.

Edit: This depends a lot on context. The relative speeds of the two lanes, whether there's a merge, and so forth. If there already had to be enough space to safely change lanes without any other cars slowing down or reacting at all, then turn signals would become unnecessary. Often there's an opening, but to really make it safe the car behind you in your target lane does need to yield to make it safe, and the turn signal is a request that they do so (or at least not speed up).

It is a signal that I am going to get over into that lane one way or the other. You can choose to keep me from doing so in which case I'll get in front of one of the following vehicles and merge there. I don't know where you drive but that is absolutely normal and expected behavior where I do.

It seems like just one of the challenges for autonomous operations. Driving that is pretty much necessary in some locations (and to do otherwise would cause other drivers to route around you in unsafe ways) would often be seen as hopelessly aggressive in others.

In the uk I was taught that indicators were to signal intent, not action. Signaling that you intend to merge, even if you can’t merge currently, would be the thing to do.

I guess the rationale is something to do with giving other road users an idea that your car might do something, so they can be prepared if you do.

"In the UK I was taught that indicators were to signal intent, not action."

In LA they are used to signal that you just finished crossing 5 lanes on the freeway all at once to get to the exit you almost missed :-)

Every time I go to L.A., I say to my wife, "Yay! I don't have to use my turn signals for a whole week!"

Yes, those are called victory blinks!

It's supposed to be: MIRROR, SIGNAL, MANOEUVRE.

By the time you've hit that indicator you've worked out what you want to do and had a bloody good look round to make sure there's no one in the way and now you're going.

Technically you shouldn't need the indicator because you're driving into the available space for the maneuver.

Just to be clear, I wasn't saying "don't use your mirrors". I left out all the stuff that was irrelevant to the discussion, like using your mirrors or starting the engine.

Also, it's a cute mnemonic device to help people remember to drive better, but as my driving instructor said, you might need "mirror, signal, mirror, manoeuvre": the mirror is to check what's going on and the indicator is to signal intent, but you might not be able to manoeuvre immediately, so you may need to check again what's going on around you before you actually make your move.

People who indicate as they are making their manoeuvre are a PITA and almost universally despised. Almost as much as people who don't use their indicators at all.

There are a few issues with the "Technically you shouldn't need the indicator" world you describe. First, it assumes flawless information is available to you: you know where everyone and everything is and you've not missed anything. Indicators give people around you a chance to see what you're up to, even if you don't know they are there or don't think they are relevant to you.

Second, you're describing a world in which you always have the space and time to make a manoeuvre without impacting others around you. My reality of driving round the North West of England is lots of cars in a small space. People who don't use their indicators are a PITA; it's just selfish. That sudden deceleration for a left turn, or sitting in the middle of the road: are you turning right, or have you broken down? Using an indicator is just polite and helps other people drive more comfortably.

Third, it helps people plan. This is linked to the other two points, but I think it's important enough to stand on its own. Have you ever seen pedestrians look at you/your indicators to work out whether they should cross a road? It happens when driving around town. One particular case I remember: I wanted to turn left, so as I drove along I turned my indicator on. A guy about to cross the road looked over his shoulder, saw my indicator and stopped. I made my turn, he crossed the road. Smooth, organised, orderly. Sure, if he had just walked out I would have stopped in plenty of time, but then I'd have been sitting on the main road with people behind me wondering what I was doing.

> turn signals should _not_ be used as "an indicator of opportunity"

This is just flat wrong and makes no sense. If you want to merge, you’re required to use your turn signal ahead of time. If you don’t know when you’ll have an opening to merge, you may have it on for a long time.

Merging is different from changing lanes or turning. In some states, you may have to yield the right of way even if you are not the one changing lanes.

But I agree it was bad wording on my part. My point was about those drivers who just leave their signal on for 2,000 ft and expect it to give them the right of way over the other vehicles in the lane they want to move into, and then usually get angry when it doesn't.

> > it would seem Waymo programmed its autonomous vehicles to treat turn signals as an indicator that the car is about to execute a turn or merge, not an indicator that it is looking for the opportunity to do so.

> Well, here's one, if not the issue... I do not know the precise wording of trafic Laws in each state, but in many if not all, turn signals should _not_ be used as "an indicator of opportunity".

Certainly not true in NJ.

> A signal of intention to turn right or left when required shall be given continuously during not less than the last 100 feet traveled by the vehicle before turning.


My wording was bad, I admit it.

My point was that signals should only be given when you actually have the _opportunity_ to make the maneuver. If the lane to your right is crowded, putting your right signal on achieves nothing. In other words, your signals do not give you the right of way.

I do not think Waymo cars are unable to use their signals; they are probably just programmed not to turn them on when the turn/lane change cannot be achieved anyway.
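The distinction this subthread keeps circling (signal only once a safe gap exists, vs. signal to request that others create one) can be sketched as two toy decision rules. This is purely illustrative; the types and function names are hypothetical, not Waymo's actual planner logic:

```python
from dataclasses import dataclass

@dataclass
class LaneState:
    """Hypothetical snapshot of what the planner knows about a lane change."""
    want_lane_change: bool   # does the route require moving over?
    gap_available: bool      # is there already a safe gap in the target lane?

def signal_on_opportunity(state: LaneState) -> bool:
    """Conservative policy: only signal once a safe gap already exists
    (the signal means 'about to execute')."""
    return state.want_lane_change and state.gap_available

def signal_intent(state: LaneState) -> bool:
    """Human-style policy: signal as soon as a lane change is wanted,
    effectively asking following drivers to yield."""
    return state.want_lane_change
```

In dense traffic (`gap_available=False`), the first policy never turns the indicator on, which matches the behavior the article attributes to Waymo; the second policy signals anyway and relies on other drivers to open a gap.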

When someone indicates his intention to change lanes when there is no room, I slow down. I expect the same of others.

And when that doesn’t happen, you just continue on because chances are high that the next one will.

In many cases of heavy traffic, this is the only way to change lanes.

But that perfectly describes “about to execute a turn”. Not what the article is describing, which is to request that other cars open up a new (previously non-existent) space to allow for a lane change.

Human drivers will sometimes do this, but in my experience they’re just as likely to cut you off once they see you want to change lanes in front of them.

Isn’t it perfectly possible to signal intent even when there is no ability to do so?

>...turn signals should _not_ be used as "an indicator of opportunity".

VA Code §46.2-848: Every driver who _intends_ to back, stop, turn, or partly turn from a direct line ... shall give the signals required in this article, plainly visible to the driver of such other vehicle, of his intention to make such movement.

Emphasis mine.

States typically follow MUTCD[1] traffic rules, but they can diverge on interpretation. One source of divergence is what to do at a solid red arrow (in some states it means stop and proceed as at a normal solid red light; in others it means absolutely no turn on red). I imagine there are other areas where the rules are interpreted differently.


There's not much point to the signal if you don't provide it before you take the action.

In the case you hypothesize, signaling early may be illegal (and so a Waymo car wouldn't do it), but backing off to yield space to a signaling car is not illegal, so a Waymo car could still do that.

It seemed to do fairly well from what I heard. It didn't crash, which is the bar they're going for. Driving past a place is not only OK; I'd be more than comfortable having the car do so. The issue opponents are going to have in the future is when Waymo actually opens up to the general public in one of these cities and doesn't crash. The number of riders will easily outstrip the people screaming. Just like the scooters, they may leave for a while, but the damage will have been done. There are no fans currently that depend on this, but that's easily going to change.

I agree. When I'm dropping someone off, I often drive past the intended drop point, let them out across the street, etc, because there is an unreasonably high cost to hitting the exact drop point. It sounds like self driving cars are only allowed to do exact drops, which is an irrational constraint IMO. Clearly there's a trade-off between trip time and drop accuracy, and time is usually 10x more important than accuracy.

> ...because there is an unreasonably high cost to hitting the exact drop point.

That is literally the difference between a professional driver and a "ride share" amateur.

Yeah, yeah, cheap shot but those ride share drivers drive me crazy with their antics.

Yeah, I don't necessarily care if it doesn't take the most efficient route, as long as it arrives within a reasonable amount of time, the ride is pleasant, and I'm not getting charged extra for the detour. Going around the block seems okay, but skipping the freeway and taking 20 minutes longer definitely isn't.

What good is a newborn baby? Benjamin Franklin [1]

[1] https://www.americanheritage.com/content/“what-good-new-born...
