Google is "losing out" in the sense that it's actually making an effort to perfect the technology to the point where a car can safely drive itself with no driver, while everyone else (Tesla, Uber, etc.) is happy to rush to market with options that still require an attentive person behind the wheel. These are different focuses that have different outcomes and time scales, as the last part of the article says.
Little bit of a tangent here, but I've always hated the fact that Tesla calls its assisted driving thingy "autopilot", which clearly implies complete autonomy, while still requiring a person to stay alert and intervene at a moment's notice.
What Google is trying to do is an actual autopilot, which is much more difficult, I suppose.
The irony is that people think autopilots work this way but they don't. A simple one might just keep the plane flying in a straight line. So "autopilot" is a better metaphor for what Tesla is doing than what Google is doing.
When flying on instruments, air traffic control keeps planes apart. It's up to the pilot to program the autopilot correctly. (There are other collision avoidance systems but planes can fly on instruments without relying on them.)
But in the end they're not really comparable. Collision avoidance for airplanes is a different problem.
The public's perception and 'common sense' understanding of technology is sadly very limited. So it wouldn't be difficult to 'toy' with it. But I don't think that's the point here.
As long as they are telling people they have to be alert while using autopilot I don't see a problem with it. The 'common sense' assumption then would be that autopilot = assisted driving rather than being an autonomous self-driving car. Therefore Google et al. shouldn't be using the term autopilot when they mean an autonomous self-driving vehicle.
Eventually, as this tech becomes widespread, the distinction will become common knowledge to people.
This is mostly just semantics... context is everything. The important thing is preventing people from dying. Which means looking beyond marketing material to educate people.
Telling people to remain alert will do no good because it is simply asking for too much. A person is supposed to sit still, not driving but maintaining their full attention while being ready to automatically take over in an emergency situation. This will lead to the same failure of vigilance over time we see in guard duty and it really can't be helped. Task unrelated thoughts crop up more frequently for simpler tasks; with few stimuli, low surprises and little to attend to, the brain will begin with experience replay (this is actually more a machine learning term but it's appropriate here).
Driving already leads to mind wandering states; it is overwhelmingly likely that the passive aspect of autopilot will lead to mind wandering at even higher rates. Asking a less practiced driver to shift attention from internal to external states very quickly and then make complex judgments is simply not fair. A simple physics and statistics based model will be far more reliable. If it isn't, then Google's strategy of shooting straight to level 4 makes sense.
> The public's perception and 'common sense' understanding of technology is sadly very limited
Common sense is contextual and one of the more complicated aspects of cognition; it depends on the level of detail in the model being used to make inferences. A model's sophistication is dictated by internal preferences and goals. If most people's understanding of a technology is limited, then they're going to be doing what looks like averaging over distinct possibilities to a more informed model.
It doesn't help that if you know nothing about technical uses of the term autopilot but do know something about words (which will be the case for most people), then in truth it is the aviation industry that has misused the term.
It's a straight-up terrible misappropriation of the term, and it's clearly done for marketing purposes. Taking a word that the public thinks means one thing, then telling them that technically, they're wrong in what they think... that's just terrible.
It's not a matter of 'educating people', merely so they can use a popular term. It's just plain the wrong thing to do.
As much as it pains me, you are right. As a sailor I would never rely on my autopilot to get me anywhere except in a straight line. The term has a connotation of coolness because it directly references similar features in the aviation and yachting industries, if you understand what it means. But the unwashed masses don't comprehend this.
They're telling people that essentially in the fine print. If they wanted to be transparent about it, they'd have called it something like 'Assisted driving', which unambiguously requires your attention. 'Autopilot' could mean assisted and it could mean full autonomy.
>>They're telling people that essentially in the fine print.
The car issues obvious and repetitive warnings about what the technology is capable of and what the driver has to do, and turns off the Autopilot if the driver doesn't pay attention. I wouldn't call that "fine print".
What's the point of using it, when I have to stay at the wheel and be alert, and be ready to steer at a moment's notice, while driving a dumb car?
Either I can fully rely on the system to get me from point A to point B, or I have to fully concentrate on driving. People aren't robots - they can't go from half-assed sorta-paying attention to a split-second life-saving reaction.
That's not a fair argument. 1. Cruise control makes cars easier to drive, and adaptive cruise control even more so. 2. The end goal is full autonomy anyway; Tesla seems to be the only manufacturer releasing incremental updates toward reaching that.
Cruise control is as far from autonomous driving as a piece of graph paper is from being a computer. Nobody has ever described it as a paradigm shift, and its invention did not prompt unsubstantiated speculation about how driverless taxicabs are literally two years away. And yes, from a safety perspective, Tesla's autopilot is no different from cruise control. Keep your hands on the wheel, your feet on the pedals, and constantly pay attention.
Why are you assuming that other manufacturers are not working towards full autonomy? I strongly doubt that everyone at Ford, GM, VW, Toyota, and Honda is asleep at the wheel... Especially when their luxury vehicles are all incrementally moving towards autonomy.
They certainly have a lot fewer PR pieces about how amazing their autonomous-but-not-really lane assist is.
I know what you mean, but I think I've got to disagree. The general sense of "Autopilot" implies the same level of sophistication as the other vehicles that use "Autopilot"... Boats use it mid-course, as do planes, and now cars are using it. It won't park, and it's not going to get out of the carpark and onto the road, but it'll take over once the course is set and there are fewer technicalities about direction.
There's a mismatch between the way professionals and the public use the term 'autopilot'. Pilots know it's an aid, not a turn-key solution. The public misses that important detail. Since Tesla must understand this effect, I resent them using this as a marketing term, since they can exploit the misunderstanding while not being technically wrong.
By the way, I'm pretty sure completely autonomous parallel parking is already available retail.
If you can get more than 50% of people to understand what your thing does just from the name, I'd say that's pretty good, actually. Literally no one is turning on Tesla's Autopilot with no information on how it works beyond the name.
Not many people would literally declare "I confidently believe that airplane autopilot is a turnkey solution". But perhaps the point is that a lot of people would vaguely feel that way without exactly crystallizing it as a thought. And those vague feelings would be what the branding is seen to be playing off.
I fall into this trap so many times. You know in your logical brain that what you see isn't a good enough sample for any kind of statistical significance but at the same time you keep seeing things that every other part of your brain reckons is significant.
Like I see Belgian drivers driving terribly every day and I'm pretty certain that they're much worse than British and German drivers in every way. But the only statistical thing I can go back to is the number of deaths [1], which could include other aspects rather than just stupid driving.
My brain wants to shout out that there's clearly a problem, but that 'clearly' is only on the stretch of road I see. It could be that Belgian drivers are really good everywhere else in Belgium.
> By the way, I'm pretty sure completely autonomous parallel parking is already available retail.
It is. Tesla was also beta-testing (last I saw, a few months ago anyway) completely autonomous head-in/rear-in parking, including finding the spot in a crowded parking lot.
>>Since Tesla must understand this effect, I resent them using this a marketing term since they can exploit the misunderstanding while not being technically wrong.
I resent that you're accusing Tesla of intentionally misleading the public on what the feature does.
The car issues very obvious and repetitive warnings to the driver to explain the shortcomings of the technology and what the driver has to do to compensate. If the driver fails to act in accordance with the warnings, the car turns off the feature.
I mean, I dunno. To me, it looks as if Tesla has already gone above and beyond to communicate accurately, and the main reason they're pushing things even further is because of the intense media scrutiny on Tesla accidents (which the incumbent manufacturers probably love).
I can't agree more. For a company that is expected to understand the science behind the control systems they develop, they absolutely do not get a free pass on what would historically be called "human factors".
They are both fully aware of the human-attention implications of not expecting continuous input from the human driver, and choosing to encourage it anyway.
Humans fundamentally can't stay attentive when their attention has no reactive feedback loop and they're only using the "stay attentive" (even though we know that it's essentially impossible at a psychological level) argument to shield themselves from legal liability.
It's both fair play because they knew in advance (and have since proven) that they can beat the odds against a human driver, and have simultaneously built in their legal defense to mitigate their downside. "We told you to pay attention. Your death is not on our hands."
A plane autopilot is supposed to be capable of flying the plane unattended and handing over control in an orderly, controlled fashion before critical problems arise [1]. A ship's autopilot is expected to do the same. The same should be expected from a car autopilot. The fact that the environment is much, much simpler for planes and ships doesn't change the fundamental expectation. As long as car autopilots are not capable of fulfilling the expectation that they can autonomously and safely control a car in a given real-world environment with other participants, they should not be marketed as autopilots. They're driving assist programs.
[1] there are examples where this did not happen, but they're considered a failure.
As far as I know there isn't any autopilot that can fly a plane unattended unless you are flying on instruments. Collisions are avoided by relying on the air traffic control system knowing where all the planes are scheduled to be. (Plus plenty of backup warning systems.)
You can still use an autopilot when flying visually but then it's up to you to watch for traffic. This is using it like cruise control.
> Collisions are avoided by relying on the air traffic control system knowing where all the planes are scheduled to be. (Plus plenty of backup warning systems.)
Yes, perfectly fine. If Tesla's autopilot can rely on external traffic control to provide that information, call it autopilot. As long as it can't, don't.
That seems roughly equivalent to taxiing in a plane, something which isn't handled by a plane's autopilot. After using it, it feels very intuitive to understand what it can and can't handle, and it does degrade very gracefully (beeps at you and says 'take steering wheel to maintain speed' if it starts to get confused.) It works very much how I would have expected knowing nothing more than the name.
But a technically very similar set of features results in a vastly different user experience. A driver with Tesla autopilot has a way smaller margin of error and needs to react a lot quicker once the autopilot encounters a situation it can't handle. Planes in flight generally have way larger error margins, and a lot of effort goes into making sure they are nicely separated from traffic and obstacles (and when they do come close, it's known early and the pilots can get ready to take over). Until we have highways with similar traffic control, cars can't match that without a lot more intelligence.
There are other systems on the plane (TCAS, ATC communications) that provide information about conflicting traffic. Does Tesla provide those? Will there be enough time to react if it did?
There are far fewer obstacles in the air than on the ground.
If you put a plane on autopilot and then go read a book for some hours, most likely nothing bad is going to happen (particularly on IFR, but even on VFR, realistically).
Tesla's autopilot is more advanced than aircraft autopilots, I'd say, yet more dangerous (for now) because of all these darned cars and trucks and pedestrians.
In that sense, "Autopilot" was an unfortunate and overly ambitious name.
Autopilot (before Tesla) is a feature on modern aircraft. If you'll notice, aircraft always have an attentive person behind the wheel (if it's a commercial flight, two).
A plane can fly itself under normal conditions (highways for Tesla) and many can land themselves now (Model S can park / summon feature).
TCAS Resolution Advisory will give an attentive pilot a good 5 seconds to react in the worst case. How much time does a Tesla 'pilot' have in case of "oh, crap, there is a truck stopped 20 feet in front of you that I failed to detect at highway speed. Now you drive!"?
> TCAS Resolution Advisory will give an attentive pilot a good 5 seconds
> "oh, crap, there is a truck stopped 20 feet in front of you that I failed to detect at highway speed.
To appreciate the differences in scale: 5 seconds in a plane covers something like a mile. A high-performance car can still stop from 60-0 in 110 ft, and 110 ft is only about 1 second of travel at that speed. For context, it takes at least 0.7 seconds for a human to respond.
(The car or terrain-monitoring autopilot can react in <0.1 s, which means that if the car sees the obstacle 110 feet away it can stop in time, whereas if a car pulls out onto the highway in front of a human it would take closer to 200 ft to stop.)
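To make the reaction-time argument concrete, here's a rough back-of-envelope sketch in Python. It assumes the 110 ft braking figure quoted above, a 0.7 s human reaction time versus 0.1 s for the computer, and simply adds the distance rolled during the reaction window to the braking distance; all figures are illustrative, not measured.

```python
# Back-of-envelope comparison of total stopping distance from 60 mph.
# Assumed inputs: 110 ft braking distance (high-performance car),
# 0.7 s human reaction time, 0.1 s computer reaction time.

MPH_TO_FTS = 5280 / 3600  # 1 mph = ~1.467 ft/s

def total_stopping_distance(speed_mph, reaction_s, braking_ft):
    """Distance rolled during the reaction window plus braking distance."""
    v = speed_mph * MPH_TO_FTS          # initial speed in ft/s
    return v * reaction_s + braking_ft  # reaction roll + braking

BRAKING_FT = 110  # 60-0 mph braking distance

human = total_stopping_distance(60, 0.7, BRAKING_FT)
computer = total_stopping_distance(60, 0.1, BRAKING_FT)

print(f"human:    {human:.0f} ft")   # ~172 ft
print(f"computer: {computer:.0f} ft")  # ~119 ft
```

On these assumptions the human needs roughly 50 extra feet just from reaction time, which matches the ~200 ft figure above for a less-than-ideal human response.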
> The car or terrain monitoring autopilot can react in <.1 s; which means if the car sees the obstacle at 110 feet away it can stop in time
What happens if the driver expects the car to react upon seeing the obstacle but it fails to do so? Will the driver have enough of the remaining time to react?
Nope. And in the case of terrain following airplane algorithms, it's practically impossible to maintain minimum clearance AND allow for any expectation of pilot override.
The >=0.7 second human reaction time (even with a planned/known reaction such as braking or jerking the steering wheel) is the very reason why autopilots are safer.
The differences between TCAS and Tesla are even more extreme.
> TCAS Resolution Advisory will give an attentive pilot a good 5 seconds to react in the worst case
5 seconds in order to avoid violating intruder's airspace, not to avoid actually hitting the intruder. AFAIK there's even then some additional buffer zone.
Also, reading the list of actual TCAS advisories sheds additional light on the substantial differences:
1. Almost all advisories actually tell the pilot what the problem is and what to do, not just that there's a problem. Tesla Autopilot basically has one advisory: "TA".
2. Almost all of the advisories resolve collisions by making one-dimensional maneuvers. Having an extra dimension allows you to substantially simplify the collision avoidance problem.
3. It's a safe bet (although not assumed) that both planes involved in a TCAS event are receiving advisories and cooperating with ground ATC.
If I were Google, I would be concerned that competitors are getting much more data much more cheaply than they are and that recent advances in image recognition/processing make the tech that much easier for Ford, etc to build in-house without having to license it from Google.
Though I guess Google/Uber are probably getting significantly higher resolution data than Tesla/MobileEye/etc since you can't stream all sensor data back to HQ over LTE.
I think a lot of these articles are driven by the fact that Google has been working on this for so long and yet we have upstarts that are getting cars on the road in a fraction of the time.
It's very possible that Google overestimated the problems with Level 3 Autonomy and the public backlash associated with it and building L3 tech as an on-ramp towards L4 may turn out to be the correct path to market and funding further development effort.
Google had come out saying they don't believe that an incremental approach will lead to fully driverless cars; at some point the leap has to be made. Sooner rather than later in Google's case.
Given that Google doesn't make cars, I don't really see how they would go about commercializing L3, even though it's well within their capacity from an innovation standpoint.
I really don't think they expected the auto industry to respond so aggressively with their own proprietary autonomous driving OSs, determined to shut Google out.
At this stage their only option is to get to six sigma level 4 in a way that leaves everyone else scratching their heads wondering how Google did that. It's going to take a lot longer than Google anticipated.
Now Google could probably release level 4 quite soon in a limited capacity that's more an amusement park ride than a reliable, profitable transportation service, but it's probably smarter for them to keep their eye on the prize and keep plugging away at it. They're in too deep to throw in the towel.
The fact that they've said it doesn't mean they're right. But I think most of their statements have actually been about how L3 is unsafe, rather than about incremental progress being impossible.
Plenty of the startups in this space aren't making cars, but buying cars and installing aftermarket kits. Otto is a great example. Or they could have done what MobileEye did and sell L3 tech to car companies to fund further development.
I don't find the auto response particularly aggressive; they've waited 7 years to even start anything. The barrier to entry in that time has gotten so much lower due to deep learning that geohot can build a car that has some level of autonomy in his garage, and I think those advances were what really caught google flatfooted.
Can Google keep funding a pie-in-the-sky research project for another 20 years, which is roughly the timeline they've given for L4 autonomy?
Ah, that makes sense. Still, I sort of expect raw sensor data to be very high volume, so Tesla engineers can probably deploy "queries" to the fleet for which data to upload, but not get the whole thing.
My suspicion is that they are way ahead of competitors in terms of data from testing self driving cars and realize there are still so many unsolved problems that true autonomous vehicles aren't going to happen nearly as soon as competitors claim.
For instance, I see self-driving Google cars on a daily basis here in central Austin. There are a ton of zero visibility turns where you have to slowly edge out and hope no one is coming because visibility is blocked by a hedge.
How can self-driving cars handle that scenario without delegating to manual intervention?
My question would be: how can a human handle that scenario without computer assistance? You can always add a visual sensor near the front bumper of a self-driving car to safely let it peek around the corner without having to pull forward far enough to get hit.
How is it better to prevent the computer from safely seeing around the corner almost all the time just because mud will prevent it some of the time, while allowing the human to continue driving despite never being able to safely see around the corner?
I agree with your point, but I'm not sure about your example! Can't self-driving cars "see" around the blind corner much earlier than a human could due to the fact that they have sensors on the very front of the car, as opposed to our "sensors" (eyes) back behind the windshield? They might do better!
Not only the placement, but with radars [1] you can see through a bunch of objects like trees, wooden fences, and indeed also hedges. Of course making sense of the data in a completely new environment is hard. This is where fleet learning helps a lot.
There are a lot of things that are opaque to normal-band light that are transparent to other things, like infrared, ultraviolet etc (and vice versa). This goes for more than just light. It all depends on the wavelengths.
One should not assume that simply because a human can't see through it, doesn't mean a computer with the right kinds of sensors can't.
You are not familiar with the technology. The sensors can "see" beyond a hedge more easily than a person could. Look into it and you'll see that the sensors have a far wider range of perception than a human driver.
Get a human (or some other sensor) to look around the hedge? At some point, humans are going to become "just another sensor platform" for the car's autonomy to utilize. And then we can just work to replace human sensors with cheaper sensors.
As an Austin driver I can agree with the dangers of our roads, but I think those zero-visibility turns are solvable in exactly the manner you propose: slowly edge out until a front corner sensor can look down the road. I'm sure there are countless other factors I can't imagine, so I understand it isn't as simple as that.
I agree with your main point. Google certainly has enough data to have encountered issues beyond the progress of the automakers advancing incrementally. However, I would not be surprised if Tesla's approach is also yielding advances unavailable to Google. I would love to see their results combined at some point.
Discussing your specific example, I wonder if autonomous vehicles might learn to avoid blinds given enough crash data.
Another thought: because all this data is collected and analyzed it will be easier to automatically report this as a problem intersection to the city planners.
Granted this can happen now with people calling in but they will forget to do so, won't have images to show for backup, etc. Roads can be made safer this way.
Mercedes is working on that, and it is partly available as a driving assistant called Cross-Traffic Assist. But I assume there is still a lot of additional work to do.
Second, it will come when cars finally talk to each other, which will become mandatory in a few years.
I think these "attentive driver systems" like lane assist, auto braking, etc., are eventually going to get banned by the DOT because they seem to be causing people to over-rely on them and drive distracted. They keep causing high-profile fatal accidents, like the recent one where a guy was watching a movie with lane assist on and his car plowed right into the back of a semi.
It's going to get to a point where they say it's all or nothing, we can't rely on people to stay attentive when the car is doing 90% of the driving.
1 crash is more than the expected number of crashes for that amount of highway driving, at least from the stats I've seen, and driving at full speed under a truck despite having more than enough time to brake is a really nasty failure mode.
>Also it was a truck passing across the path of the vehicle
That is exactly the sort of situation self-driving cars are expected to be perfect, or at least superior, to humans at handling. If they're no better than a human driver, or even a bit worse when edge cases occur, then what's the point?
People drive more safely on roads without guardrails, but that is not a good enough reason to ban guard rails. On the contrary, you've already seen Mercedes add these features to every one of their new cars sold, it's only a matter of time before all automakers do the same.
When people drive on roads with guard rails, they don't think "hey, I have more time to send this text because the rail will stop my car from going over the cliff."
Yeah, ever since they put a divider (the "Zipper") on the Golden Gate Bridge drivers have been speeding more [0], but it's still better than having some plastic sticks separating opposing lanes of traffic.
The statistics aren't in yet, and it's too soon to make a reasonable comparison.
That said, I think GP will be proven right in time. Risk compensation[0] is a well-documented phenomenon and this seems highly likely to trigger that sort of behaviour.
As the system gets incrementally better, paradoxically it could get less safe, because drivers will get more used to relying on it and stop paying attention.
One of the biggest mistakes you can make is to believe that the news is an accurate reflection of reality. The opposite is actually true, in the sense that what makes it to be high profile news is something unusual and surprising.
Various DOTs will likely examine before and after statistics, and consider the statistics compared to vehicles without assist systems.
They might get political pressure to consider high attention events instead of the statistics, but they are partly in the business of pushing back against that kind of pressure.
I wonder if Google is worried their competitors will actually "ruin" things by getting a bunch of people killed, thus causing huge regulations or potentially alienating the population to the point that nobody will trust autonomous cars anymore.
This is the same fear that many supporters of VR have: the risk of poisoning people's impressions of VR with cheap, substandard experiences which induce nausea and frankly just aren't very interesting.
HTC/Valve and Oculus have produced high-quality products, and then you have the trashy phone VR which isn't even comparable. Is the prevalence of substandard VR behind the recent slump in sales? It's fairly probable (along with other factors).
Slump in sales might just mean that people who wanted to buy the device have bought it already and other people are not interested enough to buy it. There are many explanations to it, blaming phone VR is one of less probable ones I think.
It's an interesting comparison between Uber and Google. Both of them are L4 or bust, but while Google is developing in an ivory tower, Uber is directly in the fray.
This means that Uber can implement 'mostly there' driverless vehicles in a patchwork fashion before everything is perfect.
If Uber's engineers can master a circuit for driverless cars that runs from, say, the airport to downtown and back, but only when the weather is good and traffic is light, they can do it because they've got the human drivers available to pick up any slack in the service.
With Google, if they're only taking passengers partway to their destination or if the system is down 20% of the time, it's just going to piss people off.
Do you mean Tesla? It's quite pointless for Uber to have a car that almost drives itself. They are not carmakers trying to sell a better product; they need to get rid of the driver to get any money out of it.
If you are going from the airport to downtown you don't want your car to stop working because it is starting to rain and wait until a human driver comes along.
I think that in that sense Uber is competing directly against Google and not, for example, Tesla. And it looks like Google's technology is more mature, even though it is still not good enough.
For Google it's a long-term investment; I think Uber is quite wrong if they think it's a short-term investment for them.
Tesla and Uber both have their respective advantages.
Tesla can sell cars with autopilot even when it doesn't work all the time, but they would have very hard time selling autopilot if it only worked on very specific cities or "tracks."
In contrast, Uber can start rolling out autopilot which works very reliably but only on a specific and well-understood "track." They can start phasing it into the market by dispatching autonomous vehicles only when you request a route that they're very confident on. In the medium term, they can dispense with having a driver on such routes at all.
One thing that Uber might try is a self-driving car "delivery" service. People might not be confident enough to ride in a self-driving car yet, but you could have an app deliver a car for you to drive to your destination (maybe w/ driver assist).
For Tesla, the incremental strategy is partial autonomy "anywhere." For Uber, it's probably full autonomy in specific locations. That's a huge advantage because it can seamlessly mesh with their existing app—they can simply start serving a percentage of Uber rides with autonomous vehicles.
Cynically, I think Google is realizing that they need an incremental adoption plan and that's why they're launching their own ridesharing platform.
There are lots of places where it doesn't rain very much.
They could also be running only when the weather forecast says that it will definitely not rain.
If there are problem locations where it is difficult to drive, simply don't run them in those locations!
Also, they could make self-driving opt-in for a discount. If there are problems, well, you signed up for it. Don't want to deal with them? Then don't take the 90 percent discount!
Indeed. Calling a car that requires constant human supervision, with the ability to retake control in a few hundred milliseconds, "autonomous" is misleading and practically fraudulent.
If it can't get from A to B without a human behind the wheel, it's not autonomous.
> If it can't get from A to B without a human behind the wheel, it's not autonomous.
How about if it can achieve this 99.9% of the time, but needs help very occasionally? How many nines do you need? Humans are not at 100% overall. There's no clear definition of autonomy.
It's hard to evaluate in terms of a percentage. Is that 99.9% of hours it's fine and the other 0.1% it will plow into something? That means that it will have an accident, possibly a serious one, one every 1,000 hrs. Which is probably about once a year for a lot of people.
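The "once every 1,000 hrs" figure follows directly from reading 99.9% as a per-hour reliability. Here's a tiny illustrative calculation, with the per-hour failure rate and annual driving hours both being assumed round numbers, not real statistics:

```python
# Illustrative arithmetic: why "99.9% reliable" may not be good enough.
# Assumed round numbers, not real measurements.

failure_rate_per_hour = 0.001   # the "0.1% of hours" reading
hours_driven_per_year = 1000    # a rough figure for a heavy commuter

expected_failures = failure_rate_per_hour * hours_driven_per_year
print(expected_failures)  # 1.0 -> about one incident per driver per year
```

Under those assumptions, a fleet of a million such cars would see on the order of a million incidents a year, which is why the number of nines matters so much.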
Right. My point was that the parent poster said 'if it can't get from A to B then it's not autonomous', but the meaning of can/can't is rather complicated in practice.
I don't know how much Google is perfecting the technology, but I agree with the sentiment about the other carmakers. There should be the equivalent of a "do no harm" first approach when AI technology crosses some threshold and goes into life-or-death territory. People who talk about how computers are safer drivers than humans in the aggregate still don't seem to ask questions about the split-second decisions humans can take when they are about to crash (and in general, all the pattern-matching instincts humans have in potential crash situations).
Sure, but there are those 30,000-50,000[0] times a year that humans get it wrong too.
Would you rather have surgery from a human with a 90% success rate but a 0.0001% chance of a heroic save, or a robot with a 98% success rate, with the bonus that the robot will only get better with more adoption?
Please provide a reference to your comment about where robots are used completely unsupervised in surgery so we know the comparison is similar. The marketing message from autonomous car makers is that you can not only take your hands off the wheel and your eyes off the road, but also that you can put your full attention to a completely different task.
Not really. Tesla has made it overwhelmingly clear that their plan is full autonomy across the entire fleet. Tesla is doing it a bit more brazenly, but they're also getting real data to run analytics and mathematical models over (the "big" in "big data"), which will make their solution safer than Google's over the long term.
They'll also have the equivalent of a Waze killer when the Model 3 hits and there are a lot more Tesla vehicles on the road. As it stands today, I literally cannot commute to work and back without seeing at least 2-3 Teslas (Model S or X) on the road in Chicago. Tesla is on its way to being a bit of a creepy monopolist, but I'd rather have them do it than, say, General Motors.
Safer how? They are missing cameras and sensors, this won't be a simple software upgrade.
I think people are handwaving away the true difficulty of door-to-door (as opposed to lane-keeping) driver-less cars that can operate safely without humans being attentive all the time.
They have cameras and radar currently, I'm not sure what you're referring to?
I'm pretty sure anyone sane realizes that the current production autopilot hardware is not capable of fully autonomous (i.e., 100% human-free) driving. I also don't think anyone is handwaving anything; it is a problem that, as of yet, has never been solved. However, data is an enabler here, and having more of it means Tesla is uniquely positioned to crack that nut before anyone else. Teslas collect data even when AP isn't enabled; the cameras and radar aren't switched off. This simple fact gives Tesla a leg up over virtually all of the competition, and until Uber has the same or superior tech across its entire fleet, it will be lagging.
So... When the tech is capable of full autonomy, Tesla will be sitting on an absolute treasure trove of fleet data, which they'll be able to exploit.
But are they collecting the right data? Google's cars are collecting hyper-accurate road data, their radar has longer range, and LIDAR coverage is 360 degrees. If you look at the sensor map of the Model S, it has gaps in its field of view. Is a Model S making a left turn at a crowded intersection with limited field of view, going to see the pedestrian behind the bushes that's about to step out?
Human drivers have actively-pointed stereo vision sensors, IMU, and haptic feedback from the steering wheel and pedals. That's it. We know it can be done with only those sensors.
And those sensors require software (our brains) far surpassing anything man-made. Until we have the software capable of doing what the brain does, then we'll need better sensors to pick up the slack.
We know it can be done unsafely, to the tune of 30,000 deaths on American highways every year. We want cars to have super human sensor capability, otherwise, there's no point.
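To put a number on that "unsafely," here's a quick sketch of the human baseline using approximate public figures (roughly 35,000 US road deaths per year against roughly 3 trillion vehicle-miles driven; both are round-number assumptions, not exact statistics).

```python
# Rough human-driver fatality baseline (approximate public figures).
US_DEATHS_PER_YEAR = 35_000
US_VEHICLE_MILES_PER_YEAR = 3_000_000_000_000  # ~3 trillion

deaths_per_100m_miles = US_DEATHS_PER_YEAR / (US_VEHICLE_MILES_PER_YEAR / 100_000_000)
print(f"~{deaths_per_100m_miles:.2f} fatalities per 100 million miles")
```

Any autonomous system has to demonstrably beat something like that rate, over comparable conditions, before it's a net safety win.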
I think the criticism is real, but also that we market the pre-stage 4 technologies inappropriately by implementing them as binary "completely hands off, else emergency" systems. Integrated in a way that is more assisted, keeping the driver in control of the overall decisions but making some cautionary adjustments to implementation of those decisions to adjust speed or direction to stay in lane, avoid excessive tailgating, allow merges, etc., they could still be powerful tools to reduce some number of that death toll.
I agree. I would keep it strictly as a "backseat driver" assistant, looking to intervene when you're being stupid, but the way Tesla is marketing this feature is dangerous. The software should not let you take your hands off the wheel, and there should be no "off switch" granting access to a hidden full-autopilot mode, because people's lives are gated by the riskiest, dumbest drivers. This is one issue where being "open" with respect to the system software is bad, and being more "Apple"-like in imposing restrictions is for the greater good.
We know that the best driver in the world uses that sensor suite, and so does the worst. If every driver drove like the best human driver and never got tired or drunk, would that be pointless?
I don't see how there can be more Tesla drivers than Waze drivers. The Model 3, while cheaper than the X and S, is still an expensive car compared to all the Civics, Sentras, Elantras, Focuses, and Accents on the road today.
More? No, but much better data. Fundamentally, the radar point cloud plus the camera info from every single Tesla is much higher-resolution data than Waze's phone GPS data.
Cars have always been coevolutionary with infrastructure and, hell, society as a whole.
Why solve a problem for a point in time and its assumptions... the hard way?
By adopting adaptive and coevolutionary timelines, self-driving cars and a society that uses them can "organically" (sorry for the buzzword) adopt them.
And it ignores multiple levels of useful intermediate achievements:
stage 1: self-driving on superhighways for the long haul: a wayyyy easier task with much more controlled information, markings, protocols, risks, etc. And with massive payoffs in consumer and business transportation.
stage 2: more extreme weather events, highways besides superhighways, handle commuting and congestion
after that: progressively more local streets and conditions.
This same argument could be extended to say that auto makers are ahead of Google. Correct me if I'm wrong, but my impression of the Google car was that it relied more on accurate maps, while the auto makers had better sensing hardware.
Well, Google technically has "better sensing hardware" in that it has "more expensive sensing hardware". George Hotz estimated the cost of the sensors on a Google car at about $150,000. Obviously, every Tesla doesn't have that.
But you are correct that other companies design their cars to drive themselves, while Google is just making an overglorified Google Maps client with collision detection. And the reality that maps will never be fully accurate renders Google's methodology a complete nonstarter as a real product.
Gradually introducing self-driving features lets the competition gradually gain consumer trust and confidence in the technology, as well as giving them more data for future improvements, and it also gives them a revenue stream to fund future development.
I have always been of the opinion that the best way to build high quality software is to build small increments and test them in the field as early as possible.
To me it seems so hard as to be almost impossible to jump directly from no computer assistance to fully autonomous driving, without first moving through (and learning from) all of the semi-autonomous steps in between.
Google’s self-driving car project team is also suffering a brain drain, with a handful of high-profile engineers jumping ship. To me, this is a bigger issue contributing to the "losing out" scenario.
Google/Alphabet's corporate problem is that they have no clue about how to sell anything except ads. Even when Google sells something in a B2B context, Google expects to control the terms and offer the customer little recourse.[1]
Automotive doesn't work like that. GM, for example, insists that if a supplier's part causes a recall, the supplier is responsible for the costs of the recall.[2] Auto manufacturers demand the right to send their inspectors into a supplier's plant. If supplier quality slips, the oversight becomes tougher. It has to be that way; a car has thousands of suppliers, many of whom can make the final product defective.
The new head of Google's self-driving project is a car guy from Hyundai. He understands this. Google now has a center in Novi, MI where 100 self-driving mini-vans are being built in a joint venture with Fiat Chrysler. That's step 1.
This is more of a management ego problem than a technical problem. Selling self-driving technology as an auto parts supplier doesn't fit with the "change the world" mentality. The automaker gets all the credit. Few people know the names of the major Tier I automotive suppliers. Microsoft ran into this once. Microsoft wanted the Microsoft logo to appear when the dashboard display booted up. They were insistent about this. The automaker involved went with QNX instead.
> Microsoft wanted the Microsoft logo to appear when the dashboard display booted up. They were insistent about this. The automaker involved went with QNX instead.
Of course, it's a bit more complicated than that, but the story 2 years ago is very interesting nonetheless:
I'm excited about the idea, but the implementation so far is more than underwhelming. Power cells are much more advanced now, but I don't write off any of those options. I can see the appeal of electric cars, though, as the infrastructure would be much cheaper than hydrogen.
I was just reacting to the "Manufacturing is hard/messy" - no doubt, but Apple has been orchestrating manufacturing at a huge scale to exacting tolerances for a while now.
Google does software (plus Glass, Nest, Chromebooks, OK, Nexus, etc., maybe, but mostly software...), while Apple has been doing hardware and software all along.
The problem with Apple is that they sell high-end products with a luxury/status component. The problem with that is that fewer and fewer people actually want to own a car. Transportation is becoming a service, as opposed to a physical product.
The automotive world and software engineering are very far apart. There are very few people who are truly at home in both. What Apple does is consumer grade, not safety critical. Let's just take EMC. I'm sure Apple does think about that, but that is nothing compared what you have to do for a car.
> Google’s project started in 2009, long before carmakers and most other companies seriously considered the technology.
I always wonder who writes up that shit. Car makers were into autonomous driving long before that. There's a huge difference between "just doing things and not talking about them as long as the technology isn't ready" and "talking about the technology and presenting even unfinished products." Tesla especially is very big on the latter.
Car makers have been into self-driving since technology first went into cars. Without that research we would not have any driving assistants these days. I remember the mid-1990s, when I was an intern at what was then DASA (Daimler-Benz Aerospace): they worked on a self-driving S-Class powered by a new radar-based chipset developed by what was then TEMIC (now Continental). Today we have this technology in any decent new car as part of adaptive cruise control. Back then, the test drives were made on an airport runway.
It's pretty interesting how little the tech press seems to know about the efforts of the established car manufacturers.
> Google’s project started in 2009, long before carmakers and most other companies seriously considered the technology.
Luxury cars (Audis / Mercedes / Jaguars / BMWs) have had semi-autonomous options for at least 10 years with lane-keeping, adaptive cruise control, emergency braking, etc. They've been working on the technology for decades. Improvements in the accuracy and cost of sensors along with processing power makes this a much easier problem now but to say they hadn't considered this before is so silly.
I think what most people are missing is that self-driving cars are not a disruptive innovation. All the major car makers are going to get to fully self-driving cars through incremental improvements, and they'll get there faster than Google because they already have actual customers who are willing to pay more for the technology. Tesla/Mercedes/etc. don't have to worry about this project being shelved or not getting the attention it deserves because of a lack of a path to profitability.
Just because Google talks big doesn't mean the other car makers are stupid idiots who can't see a technology trend coming from a mile away.
It looks kind of (extremely) ridiculous now, but think about all the things that changed between then and now. The iPhone is ubiquitous now, but it only came out in 2007, and we really only got modern cellular internet after that.
I think that the Smartphone revolution had a huge impact on the way most people think, to the point where it is hard to remember what life was like before them.
The autonomous, decentralized nature of smartphones, along with the cloud for updates and backup is beyond game changing, and software designed before this REALLY shows its age.
DISCLAIMER: I work for GM, any opinions are solely my own, etc.
Branding the special lane as the 'High Speed Safety Lane' is brilliant. Only autopilot/lane keeping/autobraking enabled cars allowed. Increase the speed limit since you know that all of the vehicles are able to prevent accidents in the majority of cases.
Wow. I love these retro looks at the future. They nearly always get the sheer mass of people and scalability wrong: imagine a human road-traffic controller working with each car over the radio! They applied how planes work to the road, but still, the whole get-onto-a-separate-lane, follow-the-red-line thing would work very well for interstate haulage.
I didn't know that. It shows how long this has been a vision and how long it has taken. Still, the vision is far from happening. (But maybe not an additional 60 years away.)
This may blow your mind: in the 1930s GM owned some aircraft manufacturing and investigated flying cars. Spoiler Alert: that never worked out, as you wind up with a crappy car AND a crappy airplane.
My grandfather was a radar engineer with the now defunct silicon valley company Varian Associates. He claims that they had contracts with automakers interested in radar controlled braking as early as the mid seventies. Funny to think that today's headline making technology was being worked on (in a very different way) two generations ago.
Though I'm probably wrong, it's worth considering that the key phrase in that sentence is "...before (they) seriously considered the technology". While automakers and others have been testing the technology for years, Google was probably the first to drop huge $$ into the effort, aimed, we can assume, at bringing it to mass-market, production-level vehicles. So yeah, they probably were the first to "seriously consider" the idea of mass-market autonomous vehicles.
Likewise, despite Apple's other faults, bringing quality touchscreen technology to high-volume mobile devices back in 2007 is what made that company stand out from those who preceded it in that endeavor.
Three DARPA contests jumpstarted the truly driverless car. No one but the military was seriously interested. The first contest was a joke, with almost no one making it more than partway. But things got pretty good by the third one. Google bought the Stanford team. Like the Internet, this was another federal funding success, coming when private industry lacked the imagination.
Actually, it will begin any day now [1]; however, they failed to mention that there will be a driver behind the wheel, i.e., doing what Google has been doing for years already, but also picking up passengers. Seems to me it's mainly a PR move...
Best to think of Volvo cars as somewhere between Tesla autopilot and Google self-driving. There was an HN comment somewhere else talking about how Volvo self driving cars don't require hands on the wheel, and have ~12x the sensors that a Tesla does.
Source: I live in Pittsburgh, have seen the self-driving cars on the road, and friends have ridden in them. I, unfortunately, have not lucked into a ride in one yet.
Not sure how you can be losing at this point. Self-driving cars are far from a reality. The claim that they need a "sales force" now when they don't yet have a product is disturbing. The Uber experiment in Pittsburgh has made for some outstanding PR, but let's see how it actually goes before we praise them for solving the problem. You can bet they are minimizing as many variables as they can (only operating in perfect weather, and only along routes with no construction and very well painted lines that has been laser-mapped to the centimeter).
Fact is, it's extremely unlikely we'll see self-driving cars outside the most constricted of environments for decades yet. This should be obvious to anyone who (1) has driven and (2) works with computers. It's scary to me that everyone is eating up the hype and pretending they're imminent.
As someone who lives in Pittsburgh - and likes the city quite a bit - I can attest to the fact that: (1) the weather is rarely perfect, (2) the entire city is under perpetual construction, and (3) the lines have not been painted in years. It took me quite some time to comfortably drive in the city because of the narrow streets, poor road maintenance, and bizarre traffic patterns induced by the many bridges and tunnels.
It will indeed be interesting to see how well the Uber deployment goes in practice. They certainly did not choose an easy place to start.
Google is so perplexing. I hoped their green-field research divisions would become drivers of innovation in the mold of Xerox PARC, Bell Labs, etc. They had the resources and prestige to acquire the world's top talent. All we've got are the world's dorkiest glasses. Where did things go wrong?
Thiel argues that monopoly is a necessary condition of innovation, and Google can qualify as a monopoly under any reasonable measure. But it's starting to feel like PARC and Bell Labs were an accident of history; both were perfectly positioned at the cusp of the digital computer revolution. If we're approaching a similar technical revolution, it's not clear at all. Machine learning, chat bots, drones, ride-hailing apps, etc., are great, but we're in the midst of evolutionary -- not revolutionary -- progress. That's fine. It's the normal state of affairs, progress marching slowly on.
Whether Google wants to do long-term green-field research or it wants to incubate new businesses, it's not clear. From the outside, it looks like a bit of an identity crisis.
If you look at PARC and Bell Labs, the innovations from those teams didn't necessarily become ubiquitous while within those companies. They started there, but left before they found their market and adoption success. I get the feeling we may see that with Google's green field projects as well. Google R&D may identify important areas of the future but be unable to monetize and grow them in ways that make sense in the current business.
Otto (and Uber's acquisition of it) might end up being an example of that in the self-driving car side. Google incubates that vision, unable or unwilling to release it as the founders of Otto wanted; they leave and eventually find commercial success elsewhere.
Get everyone salivating for glorious, apocalyptic disruption, scare the crap out of the traditional auto industry, incite a mad scramble that mobilizes billions of dollars and thousands of people, and then turn around and say, 'well, actually... this is really hard, guys.'
>The highly parallel computer architecture was eventually surpassed in speed by less specialized hardware (for example, Sun workstations and Intel x86 machines).
Strangely enough, a decade or so later Sony tried an exotic architecture with its PS3 only to go back to a generalist x86/x64 architecture with the PS4. Funny how the top-down approaches fail compared to bottom up 'kitchen sink' approaches almost consistently and how so many countries and companies never learn this lesson. The West didn't panic from what I can tell. AMD and Intel kept raising transistor density and clock speeds and kept giving their customers what they wanted via a competitive market approach.
Was that because the PS3 architecture was a failure? Or was it because many game developers are interested in cross-platform releases (many consoles and PC), and having an oddball architecture makes that harder for your platform to get those releases?
On the one hand, the PS3's weird architecture made porting older PC game engines (like Bethesda's Gamebryo, which drives Skyrim and Fallout 3+) extremely difficult, and much-anticipated Skyrim DLCs were long-delayed by problems that existed on the PS3 but no other platform Gamebryo supports.
On the other hand, once Xbox 360 games started getting ported to PS3 and the graphics were compared across the two systems, the consensus was that the PS3 really wasn't much more graphically powerful at all. There was one sports game in particular (boxing? I can't recall-- it's been too many years) that was a particularly telling case because the PS3 port had had something like 6-8 months of extra optimization work, and despite that the launch-day Xbox 360 version of the same game looked and played better.
"Failure" is strong; obviously the PS3 wasn't a failure as a gaming system. But its weird architecture certainly didn't deliver the benefits Sony claimed it would, and it definitely made things more difficult on game developers.
The 2 x 4 = 8-core PS4 CPU vs. the 1-main-plus-8-mini-core PS3 is not exactly a large leap back. Cross-platform games and a much-improved CPU pushed Sony to a more mainstream product, but you can also see the PS3 as an arguable precursor to modern systems.
The basic problem is it's hard to find tasks that scale out past ~8 cores really well yet can't just go to the GPU.
Those ESPRIT projects I worked on tended to have very nice Sun workstations as their standard machines - when we maxed out the RAM on a 4/330 someone actually flew from London to Edinburgh just to fit it!
Not sure if the research was worth the money but we certainly did eat well.
The modular phone project was abandoned a few days ago. I really hoped it would bring an end to this stupid era of endless consumption, where you have to replace your whole phone just to get a better camera or because they removed the headphone jack.
I called that shot the first time I read about it. There are zero successful modular consumer electronics products. The near-trivial analog connectivity between stereo components is as close as modularity has ever come in that business, and that's really an example of how, at best, modularity attracts customers that resemble audiophiles (and wannabes), which may not be an ideal target market.
Desktop computers are still pretty popular. Especially among PC gamers. That is the only example I can think of to refute your comment.
Edit: but it only works because of the space in a PC tower. It doesn't work for laptops, and I was sort of in the same boat as you. I couldn't imagine it would work for phones. Specifically that they are so space constrained.
Yes, they are modular. But the reason laptops took over the mainstream PC market is that modularity had an incremental cost, and was VERY seldom used, and when it was used, it caused problems that are relatively expensive compared to the cost of the PC. It was easier to make a range of non-modular products to cover any variability in needs.
Now modularity in PCs is the domain of gaming PCs, plus a few speciality niches. Gamers as a market could be seen as analogous to audiophiles: a bit cranky and hard to please.
You can download firmware updates for lenses as well as camera bodies, and with MFT there's even an open standard for interchangeable lenses/bodies across manufacturers.
That modularity is a pain in the ass that only photo snobs (like me) will put up with. For example: the RAW quality on my Sony a6000 was crap until I checked the firmware version, and it was several versions behind. Doing the update appeared to brick the camera, and the updating software is a nightmare. Solving the bricking problem took a non-trivial effort. Replace "camera" with "GPU card" and you get a typical story about modular PC problems.
One reason to put up with this crap is that you can save thousands of dollars buying nice old glass for your nice new camera. But even though the glass lasts forever, once you get 50MP sensors in cameras, old glass that was designed for the best resolution of 35mm film will limit the performance of a 50MP+ image sensor. Soon, photography will be mostly computational, and big old lenses will all be museum pieces.
I don't even want to think about whether I've kept the firmware on my Sony E-mount lenses up to date.
Google Books kicked off to a really nice start, but seems to be languishing. OTOH, much of the accessible out-of-copyright material has been scanned, and a significant amount of in-copyright material (books and articles) is being provided ... by other means. Still, the Google Ngram Viewer tops out at 2008, which is increasingly irksome.
A great many of Google's social products: Orkut, Buzz, and Wave.
Whatever those barges were supposed to have been used for.
It's more that they've got a whole grab bag of pending technologies that may or may not allow them to replicate the kind of success they've had with their search engine.
It keeps potential disruptees twitchy because they're all cognizant that they could be the next Blockbuster if they don't respond aggressively.
Every single product that Google has ever announced.
Google has never had a successful original product. It has only been successful in slight refinements (or, in the case of Android, just changes - not refinements) within already successful product categories. To be really harsh: if success means profit, it's only ever been successful with ads.
My wife and I now refer to this phenomenon as "getting oculused". When you come up with an amazing moonshot project, make a ton of amazing progress, and then, from an apparent outside perspective, completely stop making progress, at which point your competitors start from scratch, surpass you, and get to market first with something that has more features.
Are there other big examples in history of this phenomenon? When a company announces a lot of genuinely pioneering progress at something, and then is immediately beaten to market by a competitor who was paying close attention and moving more nimbly?
This is because the statement is utterly deceptive. They're comparing miles driven with the Uber app on vs. miles driven by Google's autonomous cars.
It would've been a much better comparison to compare Uber app miles with Waze or Google Maps miles. They probably don't because that comparison would not fit their goal of attacking Google.
Am I the only one who's kind of bothered that this is a race to be first to market? For something like this, I'd rather wait a bit and see who has the best product rather than the first product.
I think it's more differentiated than that. There were big efforts to have convoys of trucks drive themselves on freeways many years ago. Freeway driving seems pretty solved and safe at this point, and probably more reliable than human drivers. So not releasing that does no good. Driving on icy country roads with pedestrians walking on the side is a totally different story. So I think we need to look at this by functionality. We will learn sooner by releasing mature functionality sooner, and can fund research toward more automation. I don't think this is fundamentally different from not releasing a website until it has all the features you could want on day one. Granted, it's like a website handling incredibly important operations that you don't want to go wrong. But that's similar to banking websites, health care applications, etc. We didn't insist surgery robots be a full surgeon replacement on day one.
From a strategy perspective it always seems like you develop better with a clear incremental goal, even when the increments are huge. In that sense, Tesla has a clear goal to incrementally improve its cars. Uber has a clear goal to augment its service. Next steps for Google? Apple? I'm not surprised they are drifting.
I'm going to say something that will get me downvoted into oblivion... but I think self-driving tech needs to be given away for free. This is not unprecedented. Volvo gave away the seat belt. Edward Jenner gave away the smallpox vaccine. More recently, Toyota is "giving away" its patents with respect to hydrogen fuel cells, and Tesla is doing the same for batteries.
How many people die of traffic fatalities? We are so close to a world where such deaths are the exception rather than a commonplace occurrence. If only we had truly bold individuals at the helm.
This is not something I expect to read in a Google story:
"Tesla, with thousands of internet-connected cars on the road, has a similar data advantage, one former member of the Google car project said."
But it is absolutely the case that data here trumps brilliance. It's really too bad that Google isn't still partnering with Uber rather than trying to compete with them.
I think this just shows that these tech companies hoarding hundreds of billions of dollars so they can defeat the next threat is a complete waste; the money would be much better off in the hands of investors as dividends.
This is like claiming Amazon is building rockets better than SpaceX. One's rockets go up and down; the other's actually put things in orbit. Not the same, even though they both use rockets.
If they wanted to Google could probably pick a few very low risk streets or neighborhoods, distribute an app that makes you waive liability, and provide a self driving taxi service with no driver during good weather. The technology is much safer and more advanced than competitors.
I think it boils down to this: actually getting rid of the driver entirely means the system has to handle 1-in-10,000 or 1-in-100,000 situations. It also means there is no person to blame in the event of an accident, and accidents are impossible to avoid 100% of the time regardless of how good the system is, due to the laws of physics. None of that would be a problem if we had a tech-savvy, tech-friendly public that weighs things carefully. But we don't really have that. When it comes down to it, many people depend on cars for a living, and almost everyone has accepted them as a daily part of life. Google knows that any collision could result in a massive angry mob out to get them.
I actually hope articles like this one can goad them into letting us use their system, and that, if they do, the occasional Google-had-a-collision mobs turn out to be short-lived and relatively tame.
Fun fact: there's no such thing as "waiving liability" in the US. Those papers you sign are nothing more than a placebo deterrent. Companies can be, and are, sued regardless every day.
> Fun fact: there's no such thing as "waiving liability" in the US.
That's technically true (you disclaim liability or waive the right to recover for particular torts), but substantively false.
> Companies can be, and are, sued regardless every day.
There are limits on what liability can be signed away, and certainly many waivers go beyond those limits, and certainly there are disputes over whether the requirements for a waiver to be effective are met, which results in lawsuits where that is a threshold issue.
But it is not the case that signing away the right to collect on certain claims generally has no legal effect in the US, even though it might not always have the full effect it would seem from the text viewed in isolation.
You can waive your right to sue in a court of law. You will instead go to a sham court where the "judge" is only paid if the company still wants them as an arbiter, and the results are secret.
You may want to look up "mandatory binding arbitration" for its ability to stop lawsuits. You may also note the Supreme Court of the United States of America has already decided that no meeting of the minds was necessary to lose your right to the courts.
Google likes data... I wonder if they're considering giving Chauffeur away to automakers, contingent on building in the ability to gather data from it.
So google's falling behind in self-driving car development because other companies have scaled back their expectations and are now focusing on vehicles that can't actually drive themselves?
We seriously need new words to separate "self-driving car" from "vehicle with driving-assist systems". Something the marketers can't fuck up by diluting it. I suggest "fully autonomous vehicle".
This is like saying ITER has been overtaken in fusion power development because Dubai just installed a large solar collector plant that generates quite a lot of power. It's a ridiculous comparison and shifting of the goalposts.
There is a scale that's been defined, but getting news articles to use it appropriately is an uphill battle. It'll help when this becomes more common, so more people get used to the difference.
I'd be happy to be proven wrong, but I fully expect it will be decades before I could order a self-driving car with no driver aboard in Manhattan or Boston. It's certainly possible that fully autonomous driving systems under some subset of conditions, like limited-access highways, will be available sooner--maybe much sooner--but, of course, that doesn't enable the use cases that excite people who don't want to own cars, etc.
I agree with you. I do think it _is_ possible to have it sooner, though not without a massive investment and effort from many different players working together: government involvement, etc.
That's probably fair. On the one hand, nine women, one baby, etc. On the other hand, if there were widespread adoption of standards, if infrastructure were modified to smooth over some of the hard bits (e.g. transponders in construction zones), and possibly remote operators set up for when cars get stuck, adoption could be accelerated at least somewhat. I still think the generalized door-to-door driverless experience is a lot harder than many give it credit for, though.
Getting government involvement in a positive direction will probably be somewhat tough. It's not hard to frame this as a job-destroying technology that initially benefits primarily the better off.
The average lifespan of a car in California is 19 years.
Let X = # of years before true Level 4 self-driving cars are available to buy
Let Y = # of years after introduction before Level 4 self-driving cars are the majority sold
Let Z = # of years after Y before self-driving cars provide the majority of trips.
Z could be 19/2, but we can probably assume that newer cars are driven more miles than older ones, so Z is probably somewhere between 5 and 10.
That could mean that X+Y is as little as ten years away. X=5 and Y=5 both seem fairly optimistic to me.
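As a rough sanity check on the X/Y/Z arithmetic above (all numbers are illustrative assumptions pulled from this thread, not data):

```python
# Sketch of the adoption-timeline estimate from the comment above.
# X = years until Level 4 self-driving cars can be bought
# Y = years after that until they are the majority of new cars sold
# Z = years after Y until self-driving cars provide the majority of trips
#
# The 19-year average car lifespan (California, per the thread) bounds Z
# at roughly 19/2; assuming newer cars are driven more, Z is taken as 5-10.

def years_until_majority_of_trips(x: int, y: int, z: int) -> int:
    """Total years from today until self-driving cars do most trips."""
    return x + y + z

# Thread's optimistic case: X = 5, Y = 5, with Z between 5 and 10.
optimistic = years_until_majority_of_trips(5, 5, 5)
pessimistic = years_until_majority_of_trips(5, 5, 10)
print(optimistic, pessimistic)  # 15 20
```

So even granting the optimistic X=5 and Y=5, the "majority of trips" milestone lands 15-20 years out, which is consistent with the skepticism elsewhere in this thread.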
The important part of that phrase is "most trips." It's going to be a while before these cars are cheap enough that most people will have one, and even once the tech starts appearing in cheaper (10-20k USD) cars it will still take a while, since people generally use their cars for a good long time before replacing them.
Google has no intention to beat its competitors in the self-driving cars space, and I suspect it has never actually wanted to produce such cars for mass use.
What it does want is to make self-driving cars widely used, because that would mean that the time currently wasted commuting by car could turn into increased internet usage, which drives Google's revenue. That's why Google was an early investor in Uber. Its efforts in building self-driving technology are intended as a catalyst for the industry, in which case it seems to have been a success.
If Google wanted to increase internet usage, instead of self driving cars it would invest more in decent remote working technology. Not needing to drive beats laptoping in your car.
But I think Google imagines itself as an "AI company", and self-driving cars are just one of the milestones of AI. I have to admit, I'm frustrated by the secrecy of their development process; years have passed and nothing of substance has been announced about their progress.
The idea is that Google has poured billions into a decade-long project to solve an extremely hard technology problem in order to lure people into marginally more internet use, which would help Facebook as much as Google? When, if Google solves that problem, there is genuinely a big business in it?
I understand Tesla's rush to market as it wants to sell cars to make money to feed its research/work. So it needs the marketing gimmick/advantage to sell more cars.
But I don't get what Uber gains by using partially autonomous vehicles. Its only goal is to replace its fleet with fully autonomous cars. To that end, beyond gathering data I don't see anything else. By that logic, I would be really surprised if it expanded its partially autonomous offering beyond what it has today.
It makes sense for Uber because it has unique legal issues. Every PR win and deal with a municipality decreases its headwinds for the next deal it makes with a city. Also it gives Uber an opportunity to debug the experience before actually releasing a fully autonomous vehicle.
This service is different from Tesla's Autopilot feature since it is, aspirationally, an autonomous vehicle. In Uber's case, the driver is expected to actually take over in some corner-case situations.
The two main ways for Google to commercialize the technology are by including Chauffeur,
the name for its self-driving software, in cars made in high volume by existing auto manufacturers ...
Is Chauffeur a generic name for driving-log software, or is it just a coincidence that Comma.ai picked the same name for its own driving-log app? [1]
A big issue with autonomous tech is liability. With a human, you have a person to blame. With a computer, how will liability be handled? Is it the company's fault? The owner of the car's?
It seems like the ideal endgame for something like this, for Google, is patent licensing, in which case it doesn't actually need to produce a product itself.
Google wants car-bots to roam the streets and crawl the real world, much as they did in the digital space. They just need a flow of automated cars to blend into.
We all know how technology in Formula One cars improves road cars - so why don't they (Google, Tesla, Apple, etc.) compete on the race track? They could take baby steps first and compete in time trials.
It might even bring a whole new audience to the sport.
I love F1, but driving on a closed track (or highways, for that matter) is many orders of magnitude easier than dealing with traffic, people, bikes, other drivers, etc. BMW built something like this many years ago, and the car was already better than most drivers. Advancing track days doesn't really help much in bringing autonomous cars to cities.
There's not a lot of space between that and full self-driving on highways and surface streets, really. You get rid of a few sign-reading needs, but your car still has to have the full suite of sensors and compute power to monitor its surroundings.
Bloody hell... when are journalists going to learn the difference between a car that doesn't always require a driver's hands on the steering wheel, and a car that doesn't require a driver? It's as fundamental as the difference between a carriage with an extra-cool harness for the horse, and a carriage that doesn't have a horse.
Feedback from users, a measure of how comfortable people are with the idea, and, if people are OK with it, spreading the idea that driverless cars are safe and normal (which I consider different from PR, but feel free to ignore that).
You also ensure that you're taking the same type and length of trips at the same time of day as you otherwise would be.
You now also don't need an extra car. Instead of one car taking passengers and one gathering data, you only need a single car doing both.
So it's cheaper and you get more information, as well as more accurate information from it.
If you look around the internet at objections to self-driving cars, many of those objections are based on fear. So finding out if self driving cars actually scare people is probably a good thing. Even if they don't outright scare people, the behavior of a self-driving car will be different, and maybe some of those differences are unpleasant for riders.
Tesla is still a niche car company, and despite its PR, its autonomous tech isn't really better than other luxury car companies'. In fact it might be worse, and Tesla is just more willing to beta test in production.
But I doubt either company wants to actually deal with automotive manufacturing and sales. They'd be better off just being a supplier to car companies. You'd go buy a Ford Fusion with the Google Self Driving package, or something similar.
There were rumors Apple would actually build cars, but I think they were either wrong or Apple gave up. It's not that profitable of an industry.
Google has no interest in actually building cars. Google has an interest in collecting the data, and to that end would sell (or perhaps even give away for free) the software that collects it.
Google is a data-driven company, not a hardware manufacturer. Yes, Google builds its own hardware when it comes to servers, server racks, and the like, but all of that exists only to support its data-driven model.
The problem here is that carmakers have no interest in buying this software from Google and thereby handing the data over to Google. They want to keep the data to themselves.