There are too many variables. It's incredibly difficult to get consistent on-time performance out of a train system which has far fewer variables to anticipate. It will never happen for cars even if they are self-driving.
The amount of complexity involved in a self-driving car is unfathomable. They can’t even get the thing to work on simple trips in mega-sprawl Phoenix, with perfect weather and solid lane markings. We are decades out, and maybe only if they ban human drivers.
Waymo was giving out multi-million dollar bonuses. You too will learn to believe in whatever it is required for you to believe to get those bonuses.
It turns out that it’s not enough and we need another breakthrough, but it might not have been possible to see what the limitations of the new approach were going to be when they were doing new, previously impossible things every week.
I think this is similar to the belief in the 20th century that right after the moon, we’d be colonizing mars and exploring the galaxy. We had been bound to the surface of the earth for thousands of years of civilization and then suddenly it seemed like we had no limitations. You can probably excuse a little bit of excess enthusiasm.
The takeaway from the history of 20th century technology is that a given field plateaus much more quickly than the optimists expect, sometimes short of what’s needed to be truly useful. Something as simple as voice recognition has been an area of intense research for 30-40 years, and it’s still only useful for the simplest things. I have 10 Siri reminders from the last couple of days. 4 are so badly mangled I can’t figure out what I meant to remind myself. Barring some fundamental advance, we’re not going to be talking to our computers as a primary input.
The same thing happened with other technologies. We broke the sound barrier, but never developed the tech to make supersonic travel cost-effective. We put a man on the moon, but have yet to develop the tech to colonize off-world bodies. Transistor speed and density have plateaued. After years of doubling CPU clock speeds every year, Intel crawled from 3 GHz to 5 GHz over 15 years. (If CPU speeds had improved that slowly back in the day, we’d still be using 200 MHz Pentium Pros today.)
Sure, S-curves exist for all technologies, but now the only technologies allowed to even get started are ones thought to be powerless (i.e., next to useless). We wanted flying cars but got 140 characters.
Some breakthrough solves 20% of a problem, and 40% is easy to get to. 60% takes exponentially more effort. It never gets to 80% and AI collapses into a winter. 100%, it seems, requires general intelligence and that is always receding into the future.
At some point, the technology is adopted as-is, and we learn to work around the problems.
The CEO of Waymo announced in May 2018:
> Phoenix will be the first stop for Waymo's driverless transportation service, which is launching later this year. Soon, everyone will be able to call Waymo, using our app, and a fully self-driving car will pull up -- with no one in the driver's seat -- to whisk them away to their destination. And that's just the beginning!
Clearly it does work for simple trips.
To "work", a self-driving car has to be able to deal with all contingencies, such as unexpected heavy traffic, pedestrians, kids, drunk college students, sudden rain, sudden fog, construction, police redirecting traffic, etc.
Oh, and it also shouldn't kill anyone more than once in 100 million miles or so. Even if some trips turn out to be slightly less simple than predicted.
In Phoenix everyone drives fast on the highway, but Waymo cars drive 5-10 mph under the speed limit. There are some unprotected left turns across 4 lanes that they may try, but being conservative they crawl across and eventually get stuck in the middle. This is probably a good thing for now, and as the engineering team's confidence in the hardware and software goes up, it can likely be tuned.
When making decisions or coming across strange situations the AI has a confidence level, and when that falls below a threshold it can notify a remote person to potentially handle the situation. Right now Waymo has problems merging on a freeway with asshole drivers... but what's the big deal? It's maybe 1 minute out of a 30 minute drive.
If you only need a human operator to take over the vehicle 2 minutes out of every hour of autonomous travel, then you could probably get by with 10 remote operators monitoring a fleet of 200 vehicles. That could translate into way better profit margins than any current ride-share service.
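That staffing ratio checks out on the back of an envelope. A sketch, using the 2-minutes-per-hour intervention rate from the comment above and assuming interventions are spread out enough that operators can be pooled across the fleet (neither figure is real Waymo data):

```python
# Back-of-envelope check of the remote-operator ratio.
INTERVENTION_MINUTES_PER_HOUR = 2  # assumed, per the comment above
FLEET_SIZE = 200

# Fraction of each vehicle-hour that needs a human in the loop.
utilization = INTERVENTION_MINUTES_PER_HOUR / 60

# Average number of operators busy at any moment across the fleet.
avg_operators_busy = FLEET_SIZE * utilization

print(f"average operators busy: {avg_operators_busy:.1f}")  # ≈ 6.7
print(f"vehicles per operator at 10 staff: {FLEET_SIZE / 10:.0f}")  # 20
```

So 10 operators for 200 vehicles leaves roughly 50% headroom for bursts of simultaneous interventions.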
I kind of think that is viable. Replace thirty regular drivers with one. That kind of model doesn't look too far off.
Discussed on HN: https://news.ycombinator.com/item?id=18920079
Don't believe me, though. Talk to ad execs (spending money on this) or ad ops (using the money on this) and see what they think.
In fact, since you won't have the GDPR hassles if you go untargeted, you should consider following this thesis to its logical conclusion: an untargeted exchange. With CCPA and GDPR you could push the targeted guys out of business, and that's a many-billion-dollar market. If you're convinced of your thesis, then you're on the cusp of mega money.
They are not supposed to talk to passengers, though, for the true driverless experience.
And having the safety driver doesn't tell you all that much about how it would perform without.
True, the second person is the co-driver. As Waymo has discovered, a single person is not safe enough.
> And having the safety driver doesn't tell you all that much about how it would perform without.
Theoretically true. But in reality, it tells you all you need to know - the cars are not safe without safety drivers.
I disagree. There are other plausible motivations to have a safety driver at this stage even for a safe car.
The point is that they won't do even a single trip without a safety driver. They've done some PR stunts - but they won't give a single real demo to a single real journalist without a safety driver.
Even with a safety driver, a few weeks before they "launched" a "commercial" "self-driving" service, they wouldn't let journalists pick their own destination.
Cultural change is... often rather harder than technological innovation.
Yes, autonomous cars would make spreading out easier.
But it's also true it is still early and the technology will improve.
I personally don't believe it will become as reliable as people think. The last percent are always the hardest — but to make a driverless car better than a human, they're crucial.
But who cares what I (or others) think? We shall find out the truth of the matter soon enough. I just hope whoever mispredicted will take a note of it and keep that in mind the next time they make a prediction.
The last % being crucial is debatable. A self-driving car could be way more reliable than a human driver at all times except during blizzards in which case it's known to crash.
That car would be extremely reliable, viable, and useful to operate everywhere there's never any blizzard, and even where there are a few blizzard events per year, it would only need to be automagically rendered inoperable during those times. The town hall could send that kind of command, just like it would ring a civil defense siren.
~100M cars are made a year
If 10% of those are self driving, and if they have $1,000 in gross margin due to self driving features, that's $10B a year to cover past R&D. Over 20 years this adds up to $200B. Optimistically, if the tech eventually reaches 80% of new cars and $3,000 margin, then we're talking $4,800 BILLION in profit over 20 years to cover R&D. Waymo is probably spending like $1-$2B/yr right now. Their engineering headcount is not huge, it's like under 1,000 based on a LinkedIn search I did last year.
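Spelling out that arithmetic (same figures as in the comment above; the adoption rates and margins are the commenter's hypotheticals, not real data):

```python
# ~100M cars made per year (from the comment above).
cars_per_year = 100e6

# Base case: 10% adoption, $1,000 gross margin per car.
base_annual = 0.10 * cars_per_year * 1_000   # $10B/yr
base_20yr = base_annual * 20                 # $200B

# Optimistic case: 80% adoption, $3,000 gross margin per car.
opt_annual = 0.80 * cars_per_year * 3_000    # $240B/yr
opt_20yr = opt_annual * 20                   # $4,800B

print(f"base: ${base_20yr / 1e9:,.0f}B over 20 years")        # $200B
print(f"optimistic: ${opt_20yr / 1e9:,.0f}B over 20 years")   # $4,800B
```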
Really the potential market is so huge that it can justify many many billions spent on R&D. I am relatively bearish on the technology, and doubt it will be commercialized in 10 years, but I still think the investment is justified.
Apologies for the poor writing - typed from my phone. I have more thoughts here: http://www.tedsanders.com/on-self-driving-cars/
This is a common trap novice founders fall into. They see that a market is huge, and simply say "well if we can just get a piece of the pie we'll be great!" It's like saying you only need to take 1% of search from Google. Yes, that is a huge opportunity, but how are you going to do it besides hopes and dreams?
If ~$10 billion invested turned into actual self-driving technology that could be generalized, it would almost certainly be worth it (though if competitors develop it too, you essentially have a commodity and are now in the airline industry, where there's a huge moat but the margins are still awful). But people are becoming more cognizant that maybe that technology just isn't ready yet, and nobody knows when the next breakthrough will occur. Is it possible one of the competing companies already has it and just hasn't shared it? Technically... but I am very confident that right now the state of the art just isn't good enough, and the people in charge are naively throwing money at the problem hoping it bears fruit.
I realize the person I am responding to largely shares my opinions on self-driving cars not being ready for prime time yet (I'm familiar with his website), I just wanted to point out a common fallacy that leads people to work on fruitless endeavors. In my opinion an AI winter is coming before self-driving cars are "real" - by which I mean a profitable solution that can be scaled.
Personally I think given the prior that we can achieve self-driving tech, there's a ~75% probability people won't want that tech.
We've seen it so many times before, companies spending $$$ on R&D of some cool new thing and people just go "meh." I don't see a compelling argument for why this won't just join the ranks of 3D-TVs, Google Glass and flying cars.
I think you underestimate what a difference riding in back makes.
Yes, yes, when you drive, you get angry and aggressive. I do too; my observation from taking ride share (I gave away my car and now almost exclusively use rideshare) is that almost everyone rages out from time to time when they drive.
Sitting in back with a book is such a different experience. "Wow, I used to be like that?" I mean, it seems so weird to see someone get mad about traffic when you haven't taken the wheel for a month yourself.
To be sure, if we don't get full level-4; if you still need to be the 'safety driver' and actually pay attention, that's a different thing, and yeah, that's a whole lot less useful.
And to be clear, I am not saying that I think we are close to full level-4. I'm just saying that if we get there? it will be wonderful in ways that weren't obvious to me until I switched from driving to being driven for quite some time.
The potential there is quite remarkable. But it gets even more tempting because of some peculiarities of self-driving vehicles. Because of the above, the biggest company will not only be able to operate much more cheaply than anybody else, but they'll also be able to offer a better service. They can have more regular cleaning/maintenance, and most importantly, you'll be able to get a ride more reliably from them than from any other player, and in the widest array of locations.
And consider the lobbying that will inevitably happen. The instant that self driving vehicles can show a higher average safety rating, companies such as Google/Waymo will lobby governments to start banning non-autonomous vehicles in urban areas, "for consumer safety." Pair all of these facts with rapidly declining vehicle ownership among the younger generations. Things could end up changing incredibly rapidly. And whoever ends up on top is going to control what might end up becoming the most profitable business to ever exist. We talked about 10% at a $0.05/mile profit margin. That meager $16 billion is an extremely conservative estimate for a single country.
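For what it's worth, a figure like $16 billion falls out if you assume roughly 3.2 trillion US vehicle-miles driven per year. That mileage figure is my assumed ballpark, not something stated in the thread:

```python
# Recovering the ~$16B figure from the comment above.
US_VEHICLE_MILES_PER_YEAR = 3.2e12  # assumed ballpark, not from the thread

share = 0.10             # 10% of miles captured (from the comment)
margin_per_mile = 0.05   # $0.05/mile profit (from the comment)

annual_profit = US_VEHICLE_MILES_PER_YEAR * share * margin_per_mile
print(f"${annual_profit / 1e9:.0f}B per year")  # $16B
```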
And then you get into the meta-aspects. Getting from A to B is one of the most important parts of basic modern living. It's hard to even imagine the implications if a single company were able to gain some significant degree of control over this aspect of life. And then there's the data. Not only would you know where huge chunks of the population are going at all times, but you'd also have a massive real-time surveillance network covering practically all of 'real life' society, as each and every vehicle you control would need to be constantly, autonomously surveilling everything around it. The possibilities there are mostly dystopic, but that sort of knowledge and power is going to be a major driver here.
What kind of consequences will 'Carl' face for speaking to the press about this? Do you just get dropped from the program, or are there real consequences? (It doesn't sound like 'Carl' is a Google employee, at least.)
Well, here's one issue, if not the issue... I do not know the precise wording of traffic laws in each state, but in many if not all, turn signals should _not_ be used as "an indicator of opportunity".
Now, it is already difficult to build a law abiding autonomous car; expecting the car to also break the law in day-to-day cases (not just an emergency) will be downright impossible -- and I'm not even considering the media storm that would be created by a company saying "we program our cars to break the law all the time"...
Are you certain about that?
When I learned to drive, we were told to indicate a few seconds before turning, changing lanes, etc. This communicates your intentions to other drivers. In other words, the blinker says, “I’d like to get in the left lane” or “I’m getting off the highway”, not “This car is currently moving leftward.” Otherwise, why not just connect the turn signals directly to the steering wheel?
There are actually many states where using the turn signals to change lanes isn't even legally required though, so things are more mixed when it comes to changing lanes vs. making turns.
Arizona is one state where it is required judging by a warning I got from a friendly highway patrol officer who chased me ~5 miles while I was having fun on the old ninja during morning rush hour.
I—and everyone I know—would leave their blinker on as they creep forward, "requesting" that someone leave a gap big enough to take the exit.
Same thing goes for the left turn on a yellow light. If there is gapless traffic coming from the other direction, you will probably just creep into the intersection and turn at the end of the yellow, when the other cars have stopped.
Nevertheless, by doing so, you are doing two illegal things: 1) going further than the stop line and 2) passing on a yellow (or even a red) when you could have stopped (easily, you were already stopped). Is it a bad thing that people (again, including me) do this? Probably not, doing otherwise would be a huge loss of time for everyone and it does not really increase the accident rate. Yet, it is against the law.
“The driver or operator of a vehicle upon a highway, before turning to the left or right at any intersection or into a private road or driveway or from one lane for traffic to another lane for traffic or to leave the roadway, shall first see that the movement can be made in safety, and, if the operation of any other vehicle may be affected by the movement, SHALL GIVE A SIGNAL PLAINLY VISIBLE TO THE DRIVER OR OPERATOR OF THE OTHER VEHICLE OF THE INTENTION TO MAKE THE MOVEMENT.”
I guess one reading of this suggests that you must first check if it would be safe to make a maneuver, and if so, then and only then, must you use a turn signal. I think you could also read this as requiring safety checks and indicating (in either order) before turning, but I'm not a lawyer. I can't imagine that people are actually prosecuted for signaling out-of-order though.
The driving handbook, however, tells you to check your mirrors, then "Signal that you want to move left or right." (here: https://www.ontario.ca/document/official-mto-drivers-handboo...) which seems to suggest some level of intention-indicating is okay.
While this is illegal in some states, it is not in others. Furthermore, the general rule on reds (again, this might vary by state) is not that you should not be in the intersection, but that you cannot enter the intersection. If you are in the intersection, when it turns red, you clear the intersection.
It is against the highway code to do that, so can't be programmed in, as this would be ultimate premeditation of breaking the law.
On the other hand, in the real world, this is how people sometimes use turn signals, so autonomous cars are at a practical disadvantage and can't merge when a human driver could.
Edit: This depends a lot on context: the relative speeds of the two lanes, whether there's a merge, and so forth. If there already had to be enough space to safely change lanes without any other cars slowing down or reacting at all, then turn signals would be unnecessary. Often there's an opening, but the car behind you in your target lane needs to yield to really make it safe, and the turn signal is a request that they do so (or at least not speed up).
It seems like just one of the challenges for autonomous operations. Driving that is pretty much necessary in some locations (and to do otherwise would cause other drivers to route around you in unsafe ways) would often be seen as hopelessly aggressive in others.
I guess the rationale is something to do with giving other road users an idea that your car might do something, so they can be prepared if you do.
In LA they are used to signal that you just finished crossing 5 lanes on the freeway all at once to get to the exit you almost missed :-)
By the time you've hit that indicator you've worked out what you want to do and had a bloody good look round to make sure there's no one in the way and now you're going.
Technically you shouldn't need the indicator because you're driving into the available space for the maneuver.
Also, it's a cute mnemonic device to help people remember to drive better, but as my driving instructor said, you might need to use "mirror, signal, mirror, manoeuvre": the mirror is to check what's going on, the indicator is to indicate intent, but you might not be able to manoeuvre immediately, so you might need to check what's going on around you again before you actually make your move.
People who indicate as they are making their manoeuvre are a PITA and almost universally despised. Almost as much as people who don't use their indicators at all.
There's a few issues with the "Technically you shouldn't need the indicator" world you describe. First, you assume that there's flawless information available to you. You know where everyone and everything is and you've not missed anything. Indicators give people around you a chance to see what you're up to even if you don't know they are there, or don't think they are relevant to you.
Second, you're describing a world in which you always have space and time to make a manoeuvre without impacting others around you. My reality of driving around the North West of England is lots of cars in a small space. People who don't use their indicators are a PITA. It's just selfish. That sudden deceleration for a left turn, or sitting in the middle of the road: are you turning right, or have you broken down? Using an indicator is just polite and helps other people drive more comfortably.
Third, it helps people plan. This is linked to the other two things, but I think it's important enough to have as a separate point. Have you ever seen pedestrians look at you/your indicators to work out whether they should cross a road? It happens when driving around town. One particular case I remember: I wanted to turn left, I was driving along and turned my indicator on. A guy about to cross the road looks over his shoulder, sees my indicator, and stops. I make my turn, he crosses the road. Smooth, organised, orderly. Sure, if he had just walked out I would have stopped in plenty of time, but that would have meant stopping on the main road, with people behind me wondering what I was doing.
This is just flat wrong and makes no sense. If you want to merge, you’re required to use your turn signal ahead of time. If you don’t know when you’ll have an opening to merge, you may have it on for a long time.
But I agree it was bad wording on my part. My point was about those drivers who just leave their signal on for 2000 ft and expect that it gives them the right of way over the other vehicles in the lane they want to move into -- and usually get angry when it does not.
> Well, here's one, if not the issue... I do not know the precise wording of trafic Laws in each state, but in many if not all, turn signals should _not_ be used as "an indicator of opportunity".
Certainly not true in NJ.
> A signal of intention to turn right or left when required shall be given continuously during not less than the last 100 feet traveled by the vehicle before turning.
My point was that signals should be given only when you have the _opportunity_ to make such a maneuver. If the lane to your right is crowded, putting your right signal on achieves nothing. In other words, your signals do not give you the right of way.
I do not think Waymo cars are unable to use their signals, but they are probably programmed not to put them on when the turn/lane change cannot be achieved anyway.
And when that doesn’t happen, you just continue on because chances are high that the next one will.
In many cases of heavy traffic, this is the only way to change lanes.
Human drivers will sometimes do this, but in my experience they’re just as likely to cut you off once they see you want to change lanes in front of them.
VA Code §46.2-848: Every driver who intends to back, stop, turn, or partly turn from a direct line ... shall give the signals required in this article, plainly visible to the driver of such other vehicle, of his intention to make such movement.
That is literally the difference between a professional driver and a "ride share" amateur.
Yeah, yeah, cheap shot but those ride share drivers drive me crazy with their antics.