They mention that they have it in Chrysler's Pacifica minivan (but that comes from a previous partnership agreement), and there isn't anyone else participating. And Krafcik says again and again things like "potential", "some day", and "we can imagine". He mentions their LIDAR tech can tell which way a pedestrian is facing.
But since there are no prices, no products, and no actual announcements, it seems to boil down to Google feeling the heat as the company that used to be associated with bringing self-driving cars to market. I wonder if they have been doing consumer surveys on what people think about self-driving cars and finding out that Google is rapidly dropping off most people's radar.
They don't make cars, they can't get people to partner with them, and they haven't been successful at showing meaningful progress. Meanwhile in the bay area you can't drive down any freeway and not see some chortling Tesla owner talking to their friends while the car moves them along in rush hour traffic.
For me the interesting thing is that Tesla has spent perhaps $5B on developing a self-driving Model S between 2011 and 2016. And in that same time period Google went from $45B of cash on hand to $83B by Q3 2016, so they picked up and literally sat on nearly $40B over the last 5 years. Guess what? Cash sitting in the bank doesn't invent things, it doesn't build things, it doesn't "change the world", and it doesn't make you a leader. Can you imagine where they would be if they had used $10B of that to build a competitive electric car company?
The Chevy Bolt is finally shipping in volume. Deliveries started in late December. An autonomous version is scheduled for next year. (It involves Cruise Automation technology, which is worrisome.) Tesla now has a competitor that is building electric cars in volume. The Chevrolet Division of General Motors is good at making large numbers of cars.
Michigan now has a self-driving car law. It's claimed to be the least restrictive, but it's very similar to California's. As with California, the manufacturer is totally responsible for anything that goes wrong while in automated mode. As usual, Uber is complaining about this.
And then here in January we have this content-free press release.
I'm a big fan of figuring out self driving cars. As the evolution of self driving cars unfolds, I become convinced that Google (or a subsidiary via Alphabet) is going to end up being much of a player in the space.
>I become convinced that Google (or a subsidiary via Alphabet) is going to end up being much of a player in the space.
Source for delivery
edit Opel also has a German preview page up on their website: http://www.opel.de/fahrzeuge/neuheiten-uebersicht/opel-amper...
For now, the government program to subsidize the purchase of an electric car in Germany is a big flop. Not because the money wasn't good, but because we're used to jumping into our diesel Audi/BMW/Mercedes and driving 1000 km until the next fill-up.
If all cars were electric, the economy would stand still while charging batteries. So, no.
I would fight this legislation every step of the way, even if I had to liquidate all of my TSLA holdings to do so. You (legacy automakers) want to not take EVs seriously until someone spent the last decade making them practical and desirable, and then try to use legislation to catch up? Tough.
> If we clear a path to the creation of compelling electric vehicles, but then lay intellectual property landmines behind us to inhibit others, we are acting in a manner contrary to that goal. Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.
Tesla is a member of the new CCS fast-charging consortium (which supersedes the current "fast" charging CHAdeMO standard), but they're the only ones still building out their own manufacturer-specific charging network.
The Supercharging Network isn't as big a deal as you're making out. A standard will be achieved, fuelling stations will install them (they own a lot of real estate), and the minimal advantage this is now providing Tesla buyers will disappear. A charger network is not a moat.
How is a Supercharging station different from a fuelling station?
Other EV users won't go to a "Tesla SuperCharging Station", they'll go to a "BP FastCharge" or "Exxon MegaCharge" station (or some entrepreneurial EV charging outfit).
My point was, the Supercharging Network isn't a big advantage.
That might be an area where you "pick your battles" and save your energy for something else.
Building the car itself, creating a ridesharing network, finding partners, swaying consumer perception, and even regulatory challenges are going to be a walk in the park in comparison.
Google/Waymo is well positioned to be the first to solve this problem, since they have been working on this for a decade and because Google has a big advantage in Machine Learning and Computer Vision research. On the other hand Tesla already has cars on the road collecting data.
But not LIDAR data, which is why I agree with your statement that Google is well-positioned. LIDAR is expensive now, but I expect self-driving cars to be the reason that changes.
Cameras give you rich intensity information: you can easily identify road markings, signs, etc provided the scene is well illuminated. Depth from stereo is OK, but struggles if the scene is featureless. Cameras perform about as well as your eyes in inclement weather. The idea that cameras don't work at night isn't entirely true, because your car should have headlights.
LIDAR is used for robust distance measurement. You can make spot measurements on pretty much any non-specular (i.e. not shiny) surface. It's active, so it works without external illumination (you get 360-degree 3D vision at night, not just where your lights point), it's accurate enough for driving (cm-level at tens of metres), and by running sensors in parallel you can get realtime performance. The Velodyne system uses 64 rx/tx pairs for ~1 Mpt/s. In practice you get around 20k LIDAR points per camera image, because those 1 million points per second are spread over a hemisphere and your camera is imaging at 30 or 60 fps.
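The arithmetic behind that last figure can be sketched quickly (the camera FOV fraction is my assumption to bridge the gap, not a number from the spec sheet):

```python
# Back-of-envelope for the "~20k LIDAR points per camera image" figure
# (assumed numbers: 1 Mpt/s scanner, 30 fps camera, camera FOV covering
# roughly 60% of the scanned hemisphere -- the 60% is an assumption).
points_per_second = 1_000_000
camera_fps = 30

# Points accumulated during one camera frame time, spread over a hemisphere.
points_per_frame_time = points_per_second / camera_fps        # ~33k

# Only the fraction of the hemisphere the camera actually images overlaps.
camera_fov_fraction = 0.6                                     # assumption
points_in_view = points_per_frame_time * camera_fov_fraction  # ~20k
```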
You can do weird tricks with RADAR though, like the two-car lookahead thing that Tesla has implemented. You can do that with LIDAR if you're looking at a mirror (you'll measure the distance to thing in the mirror), but typically multipath returns don't have a high enough SNR to be useful.
If a car can self-drive anywhere, but will refuse to take control in an empty street covered in snow, I guess that's not a fatal flaw.
It doesn't necessarily need to be absolutely featureless: more specifically stereo matching suffers when there is local texture that is not sufficiently unique along an epipolar line (normally we use rectified images so epipolar = along the image rows). For a concrete example, if I showed you (or a computer) a small patch of road (< 15x15 px square) and told you to find its location in another image, you would struggle because of the ambiguity. This happens all the time 'in the wild'. Cars are shiny, which means specular reflections everywhere; global illumination differences are fine, but local differences cause problems. Matching surfaces like glass is also hard. Someone else mentioned the sides of artic lorries.
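To make the ambiguity concrete, here's a toy sketch (not any production pipeline) of SSD block matching along a rectified scanline: on a textured row there is one clear minimum, while on a featureless "road-like" row dozens of candidate positions are effectively tied with the best match.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssd_costs(left_row, right_row, x, half=7):
    """SSD cost of the left patch centred at x against every candidate
    position along the rectified right row (the epipolar line)."""
    patch = left_row[x - half:x + half + 1]
    return np.array([
        np.sum((patch - right_row[c - half:c + half + 1]) ** 2)
        for c in range(half, len(right_row) - half)
    ])

def ambiguous_positions(costs, tol=1e-3):
    """How many candidates are effectively as good as the best match."""
    return int(np.sum(costs <= costs.min() + tol))

n = 200
noise = rng.normal(0, 1e-3, n)   # small sensor noise on the right image
textured = rng.random(n)         # richly textured scene row
flat = np.full(n, 0.5)           # featureless road / lorry side

# One unambiguous minimum on texture; nearly every candidate ties on flat.
tex = ambiguous_positions(ssd_costs(textured, textured + noise, 100))
fl = ambiguous_positions(ssd_costs(flat, flat + noise, 100))
```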
LIDAR avoids a lot of potential confusion, but I'm not suggesting that it's a catch-all. It's time consuming to scan and the data are sparse. The best systems (should) fuse data from all the different sources to maximise confidence.
You won't get by on LIDAR alone though. You'll need cameras for stuff like identifying road lines and signage.
I don't understand why the full monty has to be in version 1.0. Give me a good automation on the low hanging fruit, and let convergent evolution of society, infrastructure, and technology iterate to the more difficult use cases.
As automation becomes better humans will be paying less and less attention to the road, making this technology somewhat dangerous in the interim. There has already been a fatal accident involving a Tesla on autopilot where the driver was watching Harry Potter.
* Predictable termination. The user must affirm that they are ready to receive the car. :: User takes over.
* Operator emergency. The user is unresponsive or active indications are that they require emergency assistance. :: auto-park in the nearest assistance area, (video monitored), emergency response crews also en-route.
* Unpredictable catastrophic failure. Internal/External doesn't matter. The vehicle is no longer responsive to the global mesh and a timeout condition occurs. Emergency response is dispatched to the area of incident automatically.
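A minimal sketch of those three categories as a dispatch rule (all names hypothetical, not from any real system):

```python
# Toy classification of the three handover/failure categories listed above.
# "mesh_alive" stands in for the vehicle being responsive to the global mesh;
# "user_responsive" for the operator passing an attention/readiness check.
from enum import Enum, auto

class Handover(Enum):
    PREDICTABLE_TERMINATION = auto()   # user affirms readiness, then takes over
    OPERATOR_EMERGENCY = auto()        # auto-park, summon emergency assistance
    CATASTROPHIC_FAILURE = auto()      # mesh timeout, dispatch responders

def classify(mesh_alive: bool, user_responsive: bool) -> Handover:
    if not mesh_alive:
        return Handover.CATASTROPHIC_FAILURE
    if not user_responsive:
        return Handover.OPERATOR_EMERGENCY
    return Handover.PREDICTABLE_TERMINATION
```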
The 'driver taking over' in your scenario would be that final case; however, most of those incidents could either be detected and minor failures avoided/tolerated, or binned into the second category. Any other cases are extreme and would result in a cascade anyway, likely even if the human were paying attention as normal.
Thus, travel on the freeway IS the low-hanging fruit: developing a space reserved for automated vehicles, preferably with app-assisted ride-sharing/carpooling.
Humans can see the scenarios below coming from enough distance to mitigate the problem and AIs have not yet demonstrated equivalence:
1. Slow/erratic person or vehicle suddenly veers into the path of travel. AIs so far tend to just be ultra-safe here and stay far away, which is not the same thing as understanding the situation.
2. Construction zone path of travel is suddenly obstructed. (I'm picking a major scenario to exemplify a class of scenarios where humans surpass AIs still in predicting their environment.)
3. A vehicle ahead performs an emergency maneuver, which communicates unseen/undetected driving hazards. A human can reason from what the vehicle did to what the hazard might be, and immediately begin mitigating the hazard. AIs have not been very forthcoming with detailed behavior descriptions, but they all appear to still be mainly designed as advanced control loops (that is to say, mostly stateless). And yes, it's going to be a legal morass when AIs begin to follow "rules of behavior," since there will always be exceptions.
>But since there are no prices, no products, and no actual announcements, it seems to boil down to Google feeling the heat as the company that used to be associated with bringing self-driving cars to market. I wonder if they have been doing consumer surveys on what people think about self-driving cars and finding out that Google is rapidly dropping off most people's radar.
There is a product and it's currently outfitted to FCA Pacifica Hybrid minivans.
>They don't make cars, they can't get people to partner with them, and they haven't been successful at showing meaningful progress. Meanwhile in the bay area you can't drive down any freeway and not see some chortling Tesla owner talking to their friends while the car moves them along in rush hour traffic.
Waymo was never in the business of making cars. As for not getting anyone to partner with them, what exactly would you call their partnership with FCA?
For the sake of argument let's say the typical "net profit margin" on a LIDAR unit is 10% (the higher the margin, the better this gets). Now if $x is the price of a typical LIDAR unit, then 0.1x is the profit (and 0.9x the cost). If Waymo can make an equivalent unit for around 0.1x, they could sell it at 0.5x (half the current market price) and make 0.4x in profit per unit, roughly 4x what the other manufacturers are making.
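In code, the back-of-envelope above looks like this (all numbers are the hypothetical ones from the argument, not real figures):

```python
# Back-of-envelope on the LIDAR margin argument (all numbers hypothetical).
x = 1.0                     # price of a typical LIDAR unit (normalised)
incumbent_margin = 0.10     # assumed 10% net margin
incumbent_profit = incumbent_margin * x        # 0.1x per unit

waymo_cost = 0.10 * x       # assumption: Waymo builds an equivalent unit at ~0.1x
waymo_price = 0.50 * x      # undercut the market by half
waymo_profit = waymo_price - waymo_cost        # 0.4x per unit

advantage = waymo_profit / incumbent_profit    # ~4x the incumbents' per-unit profit
```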
To me that would be a press release that made sense. It would say "We're going to own the LIDAR market with this product; all the self-driving cars from everyone are going to use them, and when you multiply by the number of cars, that is going to make us billions of dollars."
That is a business announcing they have a huge advantage that they are going to use to make huge profits which will keep them in a leadership position for making the gear people use in deploying self driving cars.
So for me it was odd that they announce what seems like a huge advantage but they aren't selling it to anyone except their one existing partner who opted not to participate in the press release. How did you interpret their press release?
The way I read the press release, it's a way of saying "Self-driving cars are here, they are reliable, and they're not just a rich person's toy. Every car sold on the roads will soon be self-driving, it's the biggest innovation in automobiles since the Model T, and you better have a piece of it or you will be left behind." Then they get all the major auto makers on board, and that becomes a self-fulfilling prophecy, one where they have a monopoly position on the most crucial component (which is actually the software, not the LIDAR).
Google is very well acquainted with spending billions of dollars on "free" products as a way of deepening their economic moat on the component they make money on. This is just an extension of this strategy to a market that's at least double the size of the total advertising market they've traditionally gone after.
Thomas Watson, president of IBM, 1943
Is the LIDAR market really just $1-3b because of how expensive they are now?
If this line is true, it means Alphabet is not free from the Innovator's Dilemma. That means they face a very real systemic risk of being undercut out of a market.
Anyway, it doesn't make any of what you said wrong. It's just interesting because avoiding the dilemma was one stated goal for Google when creating Alphabet.
And, well, another possible explanation is that they have limited production capacity, so they cannot physically sell to the entire market.
The tricky thing about Innovator's Dilemma situations is that it's very difficult to predict a priori which $1M markets will be stuck at $1M and which will eventually grow to become multi-billion, because this usually results from changes in consumer behavior or technological capabilities that haven't happened yet. It's a "dilemma" because the companies that overlook these opportunities are acting rationally.
But in the ideal case, owning the self-driving transportation network in most major cities, having strong network effects (assuming shared rides + brand + other stuff), and selling rides is worth hundreds of billions or more.
And not sharing tech, especially possibly superior tech is a good strategic move.
Because let me assure you that an affordable, reliable, accurate and mass-produced LiDAR would be a complete game changer in so many industries, not only automotive: Robotics/Drones, AR/VR, industrial, construction, surveillance, etc...
Give this technology a small enough package and price and it would go into literally billions of smart devices.
Selling the sensor as an off the shelf part is a distant second to their strategic platform and data goals.
"They can't get people to partner with them" because these companies are afraid of being marginalized like IBM was with MS-DOS. They are just watching the dust settle to see if they can buy and build enough on their own to stay competitive.
Finally, your cash flow analysis doesn't actually tell us how much Google has invested here. It could be $10B for all we know.
Is Google ahead for a final product? Quite possibly, but they don't have a product, and Tesla doesn't look far behind software-wise, though it is hard to tell.
Krafcik also mentions they are making the lion's share of their progress now using their simulation engine, where they can model every conceivable variation of any bizarre/difficult encounter they have on real roads. In sim they accumulated a billion miles in 2016.
This never made sense to me. You certainly need enough data, but how you interpret and process that data is far more important.
Waymo's autonomous platform is a Frankenstein of various machine learning techniques, and much of it isn't glamorous; it's less contingent on big breakthroughs than on elbow grease. Google demoed proof-of-concept full autonomy in 2012, and much of what they've been doing in the four years since is the tedious job of addressing and validating their system across the full spectrum of edge cases that must be dealt with if they ever hope to foist their safety-critical software upon the public.
It's not clear to me that Tesla's current development paradigm will ever be sufficient to completely take the human out of the loop. Tesla's approach is incremental, and I suspect they'll have to make some big changes if they wish to fully close the gap. Waymo has kept their eye on the prize from day 1.
Think of it this way: if Tesla wants to test a particular algorithm for a particular driving situation, they can "play it back" over an enormous amount of real-world situations. They will have tons more potential edge cases with which they can validate their algorithms.
Sure, they can push a beta algorithm to cars and record high-level decision making between human & algo, verifying it's not totally out of whack. But that's hardly something that is going as training data into the models.
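For what it's worth, that "record high-level decision making between human and algo" idea can be sketched minimally like this (all names and thresholds are hypothetical, not Tesla's actual system):

```python
# A minimal "shadow mode" replay sketch: run a candidate policy over logged
# frames and count where it disagrees with what the human driver did.
# All names, numbers, and the toy policy are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    speed_mps: float      # ego vehicle speed
    gap_ahead_m: float    # distance to the vehicle ahead
    human_braked: bool    # what the human actually did at this frame

def candidate_policy(frame: Frame) -> bool:
    """Toy policy: brake if time-to-contact falls under 2 seconds."""
    ttc = frame.gap_ahead_m / max(frame.speed_mps, 0.1)
    return ttc < 2.0

def disagreement_rate(log):
    mismatches = sum(candidate_policy(f) != f.human_braked for f in log)
    return mismatches / len(log)

log = [
    Frame(30.0, 20.0, True),    # closing fast: both brake
    Frame(30.0, 200.0, False),  # plenty of room: neither brakes
    Frame(10.0, 15.0, False),   # policy brakes (ttc = 1.5 s), human did not
]
rate = disagreement_rate(log)
```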
Big data is nowhere near as much of a competitive advantage as it was three years ago. It seems not everyone outside the field has noticed that, though.
Image classification seems like it would be very different; most importantly, 99.9% "correct" would be a great achievement there, but for self-driving cars a 0.1% failure rate would be completely unacceptable.
Please, oh wise ones, how do we simulate NLP data, numeric data, finance data, biological data, and anything else machine learning is used for?
Oh, you are able to classify dogs and cats in images after a 2-hour YouTube tutorial. How nice.
Renesd is correct that "big data" is overblown. There are diminishing marginal returns: you need orders of magnitude more data for the same incremental gain (and this blows up well beyond however many millions of cars Tesla can hope to run).
You're correct that data augmentation is only a marginal technique to squeeze out more performance, and not generally possible in many domains.
There's a big public-opinion angle here too.
A big advantage I see in Tesla's strategy is that all of their cars are now shipping with full self-driving hardware. Even if that hardware isn't actually used to control the car, Tesla will have an order of magnitude more real-world data than anyone else.
Other companies seem to be taking a generalized AI approach. Their cars work everywhere, but how well?
It'll be exciting to see which approach works best.
Having high res maps certainly helps, and if you have that data, then why not use it?
It is no different than what people do. If you drive the same route all the time, you have some expectation of what lies ahead.
This has to be really old information, though. They have that video of the car detecting school bus stop signs and a police officer directing traffic. A stop light is child's play after that.
If Google has an approach that works for them and it depends on the maps - that's fine. I'm just pointing out that they (probably) made a decision a long time ago that they are going to use those maps to further their self-driving technology. It's a design choice, with pros and cons, like any other decision.
You are wrong. Streetview vans have been outfitted with LIDAR for years and generate 3D models as they drive. Just because you do not see the point-cloud data in your browser doesn't mean Google doesn't have it.
I'm sure someone built a hack that let you explore that directly.
I'll bet the number of people who buy an individual Intel CPU is a fraction of a percent of the general population. So why do I see Intel ads on television?
To answer your Intel/TV ad question more directly: because there are a lot of people on Intel's payroll whose salaries directly depend on them placing ads on TV.
I'm not demanding you see my perspective; I'm not even sure that's how I see it. But really, how do we know that Intel's branding campaign was valuable to them? It seems obvious, but if it's super obvious, it should be easy to explain.
It's very much a binary situation like Siri. Either the technology works and is 100% effective under all conditions or it doesn't. If it doesn't then people aren't going to look at it as a key purchasing differentiator because they won't really rely on it. Because of this I think we have at least a decade if not more before it matters who is winning the race or not.
And I have blankets that I sleep great under in winter and summer and others that make me wake up in a puddle of sweat. And some I take camping and others that are very comfortable but very delicate.
So no I wouldn't say that anything I own works 100% of the time. Some work more than others, and good stuff generally works in several situations (or not, when it's optimized for one use case) - but overall, if your threshold for success is 'pretty well', then any self driving tech that isn't worse than humans would be good enough.
They get out of Mountain View, too. There was one stretch where a Google autonomous SUV was making the same left turn in Fremont at the same time 3 days straight. We saw it when I drove the kids to school.
In my view, Google/Waymo has the most experience and most miles of true autonomy, but I expect they're suffering from the same conflicted management syndrome that scuttled the rest of their robotics efforts. Alas.
What Tesla makes is not just a press release but a full production car that's on the market TODAY, which you can buy TODAY, with all the necessary hardware already in place. All that's left is the software, which they are currently working on. Has Alphabet/Waymo got actual production cars on the market with the necessary hardware in place?
And these "pre-recorded" videos show an entire trip – from multiple camera angles – of what we're told is entirely production hardware, not a hacked-together mule loaded up with a pile of sensors on a roof rack. Is Alphabet/Waymo testing with production grade hardware built into a car at the factory?
That video has exceptionally high production values indicative of a carefully stage-managed performance on near-empty streets with exotic hardware that's never going to be used in any consumer car. I don't mean to diminish Google's achievement, but it's hardly the same as actually shipping hardware to customers.
Waymo is down to 1 emergency disengage for every 5000 miles of driving, and that's a far more suitable metric for progress.
They didn't want to sit back and just let them monetize their customer's driving habits. Some manufacturers err on the side of customer privacy whilst others would prefer to own the revenue stream.
The phone company space is the most incestuous circle-jerk mess imaginable. Judge Greene has to be rolling over in his grave!
(Also here in case of paywall: http://www.theverge.com/2016/10/24/13389592/att-time-warner-... )
Jump ahead to 2:20 to see the segment:
That's not their "style". They like to focus on R&D, and at that they seem to be doing lots of important work (and maybe showing the world a new model, we'll see).
I wonder though, is there any other breakthrough R&D project with enough competitive advantage that they could spend those billions on, without hurting their brand (unlike something like robotics)?
So the economist in me is really wondering what that capital is doing sitting around while its owners are getting trounced for losing their leadership edge.
Holding large cash reserves gives these big companies a chance to pounce when an opportunity does present itself. That's also why prices for genuine innovation tend to be kinda insane. It's a market with one seller and max 3-4 buyers, so not exactly competitive.
Do chortling people buy Teslas or does the Tesla inspire chortles?
These things will get cheap as soon as they're being built in quantity 100,000, instead of quantity 100. Really cheap if they can be made with standard CMOS processes, which has been done experimentally. Most are GaInAs technology, which is expensive.
Sort of. "Quanergy has not yet demonstrated a version of the S3 that performs to the specifications that they announced at their press conference"
Comes with open source libraries and is a pretty great piece of equipment for indoor navigation.
One advantage is that the average brightness from other sources drops with the inverse square of distance. This means that sources more than several hundred metres away should have rapidly diminishing effects on SNR. That doesn't mean that you couldn't do a particularly bad job of designing the receiver (intolerance to interferers) or a particularly good job of designing a jammer (a knowledgeable, intentional interferer).
It's a very interesting question especially if self driving motorcycles take off which have the ability to move in highly unpredictable ways.
You can tell if someone is aiming a big light source at you; all the pixels show the same range, probably zero.
Those rotating machinery Velodyne things may not be as immune, but that technology isn't going to be high-volume.
In the general case of data affected by noise, it shouldn't be a problem: these things run a simulation of the external world, and every new input frame is never fully trusted, just used probabilistically to update the simulation. Input is expected to carry noise, so if your sensors told you something like "a pedestrian on the sidewalk just jumped 10 m in the air", you'd deduce "the guy most probably kept walking as he was doing before; possibly he jumped; or he might have changed direction instead".
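As a concrete illustration of that probabilistic update, here's a minimal 1-D tracker sketch (assumed numbers throughout, not any real perception stack) that gates an implausible jump instead of trusting it:

```python
# A 1-D constant-velocity tracker that never fully trusts a single frame:
# readings many sigmas from the prediction are rejected as sensor noise.
import math

class Track1D:
    def __init__(self, pos, vel, pos_var=0.25):
        self.pos, self.vel, self.var = pos, vel, pos_var

    def update(self, measured_pos, dt=0.1, meas_var=0.25, process_var=0.01):
        # Predict: assume the pedestrian kept moving as before.
        pred = self.pos + self.vel * dt
        var = self.var + process_var
        innovation = measured_pos - pred
        # Innovation gate: "jumped 10 m in the air" fails a 3-sigma check.
        if abs(innovation) > 3 * math.sqrt(var + meas_var):
            self.pos = pred                   # keep prediction, ignore outlier
        else:
            k = var / (var + meas_var)        # Kalman gain
            self.pos = pred + k * innovation
            var = (1 - k) * var
        self.var = var
        return self.pos

t = Track1D(pos=0.0, vel=1.5)   # pedestrian walking at 1.5 m/s
t.update(0.16)                  # plausible reading: gently corrects
p = t.update(10.0)              # "jumped 10 m": rejected, prediction kept
```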
Still, that's a large fraction of the cost of the vehicle, so I don't see it as that great a breakthrough on price.
Honestly, I personally think we'll see flash LIDAR or some other non-mechanical scanning tech (along with machine-vision cameras, or maybe only such cameras) being used. Indeed, if machine-vision cameras could be made to see a wider range of wavelengths (maybe into near-IR and UV?), have quick adjustable focus, and have some way to deal with dirt and debris, such cameras would probably be the best way to implement sensing.
Ultimately, the issue isn't so much with the sensors, but with the software; integration of the sensor data into a cohesive whole and then interpreting it properly is far from a solved task (deep learning seems to work well?).
/I'm probably rambling out of my nether regions, tho
I wonder: if more and more vehicles now have radar, isn't it unhealthy to be exposed to so many radars on the road? I mean, it's well known that military-grade radar can be very unhealthy. I would have assumed that LIDAR plus cameras plus ultrasonic would be enough.
And health effects start happening if you absorb more than about 1 kW over a small area of your skin (burns, eye damage, etc.), which is unlikely to happen if the total emission is sub-kilowatt.
(I presume there are regulatory limits for this sort of emission anyway, by FCC or otherwise, but I don't know for sure...)
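There are such limits: the FCC general-population exposure limit above 1.5 GHz is 1 mW/cm^2 (10 W/m^2). A free-space inverse-square estimate for a few-watt automotive radar (the transmit power here is my assumption) lands far below it:

```python
# Rough inverse-square estimate of exposure from an automotive radar,
# compared against the FCC general-population limit of 1 mW/cm^2
# (10 W/m^2) for frequencies above 1.5 GHz. The 3 W transmit power
# and 1 m distance are assumptions for illustration.
import math

FCC_LIMIT_W_M2 = 10.0

def power_density_w_m2(tx_power_w, distance_m):
    """Isotropic free-space power density at a given distance."""
    return tx_power_w / (4 * math.pi * distance_m ** 2)

d = power_density_w_m2(tx_power_w=3.0, distance_m=1.0)
fraction_of_limit = d / FCC_LIMIT_W_M2   # a small fraction of the limit
```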