Alphabet’s Waymo Cuts Cost of Key Self-Driving Sensor by 90% (bloomberg.com)
311 points by smaddali 135 days ago | 175 comments



Ok, that was a fairly odd press release. I thought Waymo might be about to sell a LIDAR sensor that anyone could use in their own projects, but that isn't the case.

They mention that they have put it in Chrysler's Pacifica minivan (but that's from a previous partnership agreement), and there isn't anyone else participating. And Krafcik says again and again things like "potential", "some day", and "we can imagine". He mentions their LIDAR tech can tell which way a pedestrian is facing.

But since there are no prices, no products, and no actual announcements, it seems to boil down to Google feeling the heat as the company that used to be associated with bringing self-driving cars to market. I wonder if they have been doing consumer surveys on what people think about self-driving cars and finding out that Google is rapidly dropping off most people's radar.

They don't make cars, they can't get people to partner with them, and they haven't been successful at showing meaningful progress. Meanwhile, in the Bay Area you can't drive down a freeway without seeing some chortling Tesla owner talking to their friends while the car moves them along in rush-hour traffic.

Additionally:

For me the interesting thing is that Tesla has spent perhaps $5B developing a self-driving Model S between 2011 and 2016. In that same period, Google went from $45B of cash on hand to $83B by Q3 2016, so they picked up and literally sat on nearly $40B over five years. Guess what? Cash sitting in the bank doesn't invent things, it doesn't build things, it doesn't "change the world", and it doesn't make you a leader. Can you imagine where they would be if they had used $10B of that to build a competitive electric car company?


Waymo is partnered with Fiat-Chrysler. That's a car company.

The Chevy Bolt is finally shipping in volume. Deliveries started in late December. An autonomous version is scheduled for next year.[1] (It involves Cruise Automation technology, which is worrisome.) Tesla now has a competitor that is building electric cars in volume. The Chevrolet Division of General Motors is good at making large numbers of cars.

Michigan now has a self-driving car law. It's claimed to be the least restrictive, but it's very similar to California's. As with California, the manufacturer is totally responsible for anything that goes wrong while in automated mode. As usual, Uber is complaining about this.

[1] http://www.chevybolt.org/forum/4-2017-chevy-bolt-news/5625-a...


The Waymo partnership with Chrysler is mentioned in the article. They announced that partnership back in May [1]. They dropped some press releases on their self-driving minivan in December [2], where it was noted among the press that "Other automakers, including Ford and General Motors, have made even more aggressive moves on the self-driving car front."

And then here in January we have this content-free press release.

I'm a big fan of figuring out self driving cars. As the evolution of self driving cars unfolds, I become convinced that Google (or a subsidiary via Alphabet) is going to end up being much of a player in the space.

[1] https://www.bloomberg.com/news/articles/2016-05-03/fiat-goog...

[2] http://www.usatoday.com/story/money/cars/2016/12/19/get-firs...


Time will tell, but I think the biggest red flag for Google/Waymo was how many senior personnel left the project in the last year. It wasn't just a few people - it was almost anybody that had been named in every previous press release, and many other people as well.


Did they move companies? If so, which companies did they move to? I am guessing this could already be found publicly?


Some left to start their own companies, some went to Uber, and some haven't publicly stated yet. Everything I said is easy to verify via Google (there are articles written about it). The heavy hitters are Sebastian Thrun, who left many years ago, so that's not really news; Chris Urmson, who was the face of the project, left about a year ago; Anthony Levandowski left to start his own company (but really to join Uber, since that was clearly the idea from the start) and took some people with him; and two of the lead software engineers started Nuro.ai.


I think you messed up at the end of your post. Are you missing the word "not" here?

>I become convinced that Google (or a subsidiary via Alphabet) is going to end up being much of a player in the space.


Yup. Can't edit it now sadly.


579 cars delivered in 2016; that's not volume. And they have stated that it will not be available in all markets: in Europe it will only be available in Norway.

Source for delivery

https://electrek.co/2017/01/05/gm-579-chevy-bolt-ev-2016/


Isn't it supposed to be available in all of (or most of) Europe as the Opel Ampera-E? Various German e-mobility sources have published more than a few previews, so it would be surprising if it doesn't arrive here.

edit: Opel also has a German preview page up on their website: http://www.opel.de/fahrzeuge/neuheiten-uebersicht/opel-amper...


Still low volume.

For now, the German government program to subsidize electric-car purchases is a big flop. Not because the money isn't good, but because we're used to jumping into our diesel Audi/BMW/Mercedes and driving 1,000 km until the next fill-up.

If all cars were electric, the economy would stand still while the batteries charged. So, no.


As far as I know, Norway first, then Germany and some other European markets beginning in 2018. They lose money on every car, so it seems like they are limiting production and only delivering cars in California, where they get ZEV credits.


Chevy doesn't have an intercity fast-charge network. Tesla will win any buying decision against a manufacturer that doesn't have one, at least until a sufficient national vendor-independent network appears.


Perhaps we should advocate that these sorts of connectors be made ineligible for patent protection? As a consumer, I have no interest in car manufacturers being able to prevent competitors from being compatible.


As a Tesla investor, I don't want legacy auto manufacturers getting a leg up after Tesla was the one who poured tens of millions of dollars into their charging network.

I would fight this legislation every step of the way, even if I had to liquidate all of my TSLA holdings to do so. You (legacy automakers) want to not take EVs seriously until someone spent the last decade making them practical and desirable, and then try to use legislation to catch up? Tough.


Looks like you're going to have to fight with Tesla[1] then:

> If we clear a path to the creation of compelling electric vehicles, but then lay intellectual property landmines behind us to inhibit others, we are acting in a manner contrary to that goal. Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.

1. https://www.tesla.com/blog/all-our-patent-are-belong-you


Not at all. Musk has said there are other carmakers using their patents; great! That doesn't mean they're building out charging networks globally.

Tesla is a member of the new CCS fast charging consortium (which supersedes the current "fast" charging CHAdeMO standard) [1], but they're the only ones still building out their manufacturer charging system network.

[1] https://longtailpipe.com/2016/04/18/tesla-joins-150-kw-ccs-f...


>You (legacy automakers) want to not take EVs seriously until someone spent the last decade making them practical and desirable, and then try to use legislation to catch up? Tough.

The Supercharging Network isn't as big a deal as you're letting on. A standard will be achieved, fuelling stations will install them (they own a lot of real estate) and the minimal advantage this is now providing Tesla buyers will disappear. A charger network is not a moat.


Why would you go to a fueling station when you can charge at home, at Superchargers, and destination chargers at businesses? No one is going to gas stations again with EVs, even if they convert to EV charging stations.


>Why would you go to a fueling station when you can charge at home, at Superchargers, and destination chargers at businesses

How is a Supercharging station different from a fuelling station?

Other EV users won't go to a "Tesla SuperCharging Station", they'll go to a "BP FastCharge" or "Exxon MegaCharge" station (or some entrepreneurial EV charging outfit).

My point was, the Supercharging Network isn't a big advantage.


You will only ever need a Supercharger if you're traveling more than ~300 miles in a day; otherwise, you're charging nightly at home.


That seems like a fight you're going to lose. As for me, I'm on the side of open standards (because they are so obviously beneficial).

That might be an area where you want to "pick your battles" and save your energy for something else.


As of February 2016, there were 1530 CHAdeMO high-powered chargers in the US, compared to only 253 Tesla Supercharger stations. Tesla is falling behind.


You're right quantity-wise, but then you have https://twitter.com/elonmusk/status/812708946225963008 . I'd be surprised if Teslas don't soon charge in less time than it takes to fill up with gas, maybe within a couple of years. If EVs in general take off, though, I think it's true that Tesla may have trouble keeping up with the larger market in terms of chargers.


I feel like people keep underestimating how difficult it is to build self-driving hardware and software. You can build a prototype in a couple of months with a small team, as comma.ai showed, but getting from there to something that is at least as reliable as a human driver is still an open problem.

Building the car itself, creating a ridesharing network, finding partners, swaying consumer perception, and even the regulatory challenges are going to be a walk in the park in comparison.

Google/Waymo is well positioned to be the first to solve this problem, since they have been working on this for a decade and because Google has a big advantage in Machine Learning and Computer Vision research. On the other hand Tesla already has cars on the road collecting data.


> On the other hand Tesla already has cars on the road collecting data.

but not lidar data, which is why I agree with your statement that google is well-positioned. lidar is expensive now, but I expect self driving cars to be the reason that changes.


How important is lidar? Is it some kind of a silver bullet for self driving cars?


You need both visible cameras and LIDAR (and RADAR) currently.

Cameras give you rich intensity information: you can easily identify road markings, signs, etc provided the scene is well illuminated. Depth from stereo is OK, but struggles if the scene is featureless. Cameras perform about as well as your eyes in inclement weather. The idea that cameras don't work at night isn't entirely true, because your car should have headlights.

LIDAR is used for robust distance measurement. You can make spot measurements on pretty much any non-specular (shiny) surface. It's active, so it works without external illumination (you get 360-degree 3D vision at night, not just where your lights point), it's accurate enough for driving (cm-level at tens of metres), and by running many transmit/receive pairs in parallel you can get realtime performance. The Velodyne system uses 64 rx/tx pairs for ~1M points/s. In practice you get around 20k LIDAR points per camera image, because those 1 million points are spread over a hemisphere and your camera is imaging at 30 or 60 fps.
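A quick back-of-envelope check of those rates, as a sketch using only the figures quoted in this comment (~1M points/s, camera at 30 fps):

```python
# Sanity check of the LIDAR point budget per camera frame.
POINTS_PER_SECOND = 1_000_000  # Velodyne-class unit, 64 rx/tx pairs
CAMERA_FPS = 30

points_per_frame_interval = POINTS_PER_SECOND // CAMERA_FPS
print(points_per_frame_interval)  # 33333 points fired during one frame interval
# Only a fraction of those land inside the camera's field of view (the rest
# cover the remaining sphere), which is roughly consistent with the ~20k
# points per image quoted above.
```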


Tesla seem to think they'll be OK with radar and cameras, and not lidar - that's the current hardware revision.


True, it should probably be and/or, but you'll almost certainly have a better time using both, since RADAR's spatial resolution isn't nearly as good. It's also probably because putting a LIDAR rig into a consumer car would push the price up considerably. Unless they could magically get the price down (as Waymo claims), it would be a healthy fraction of the cost of the car. The Velodyne 64-laser unit cost about $75k not too long ago.

You can do weird tricks with RADAR though, like the two-car lookahead thing that Tesla has implemented. You can do that with LIDAR if you're looking at a mirror (you'll measure the distance to the thing in the mirror), but typically multipath returns don't have a high enough SNR to be useful.


When does one ever drive in a featureless place? Are you talking about an empty street covered in snow? Or do you have any other situation in mind?

If a car can self-drive anywhere, but will refuse to take control in an empty street covered in snow, I guess that's not a fatal flaw.


Street surfaces are relatively featureless (uniform grey) at low resolution or quality and a lot of matching algorithms will fail on them. With modern high resolution cameras and algorithms like SGM (see Heiko Hirschmuller's work) you can do pretty well nowadays, but it's not a panacea.

It doesn't necessarily need to be absolutely featureless: more specifically, stereo matching suffers when there is local texture that is not sufficiently unique along an epipolar line (normally we use rectified images, so epipolar = along the image rows). For a concrete example, if I showed you (or a computer) a small patch of road (a square under 15x15 px) and told you to find its location in another image, you would struggle because of the ambiguity. This happens all the time 'in the wild'. Cars are shiny, which means specular reflections everywhere; global illumination differences are fine, but local differences cause problems. Matching surfaces like glass is also hard. Someone else mentioned the sides of articulated lorries.

LIDAR avoids a lot of potential confusion, but I'm not suggesting that it's a catch-all. It's time consuming to scan and the data are sparse. The best systems (should) fuse data from all the different sources to maximise confidence.
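That matching ambiguity is easy to demonstrate with a toy example. This sketch (invented for illustration, not from any real pipeline) does sum-of-squared-differences patch matching along a scanline with repetitive texture, and finds several equally good matches:

```python
import numpy as np

# Build a scanline with perfectly repetitive texture (period of 8 px),
# like a tiled road surface or the slats on a lorry's side.
rng = np.random.default_rng(0)
pattern = rng.integers(0, 256, 8)
scanline = np.tile(pattern, 12).astype(float)  # 96-px repetitive scanline

patch = scanline[40:47]  # a 7-px patch cut from position 40

# SSD matching cost at every candidate position along the "epipolar line".
costs = np.array([np.sum((scanline[i:i + 7] - patch) ** 2)
                  for i in range(len(scanline) - 7)])
minima = np.flatnonzero(costs == costs.min())
print(minima)  # one perfect match per texture period: the disparity is ambiguous
```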


A light colored semi trailer against a light colored sky has already been a fatal flaw.


It probably would have prevented the Tesla crash with the semi that cameras couldn't see due to low contrast with the sky. So I'd say pretty important. I'm actually shocked that Tesla has done so well without it.


At this point, LIDAR is still better than other methods for sensing and localization. However, it's quite expensive still, which is why doing sensing with cameras only has been a hot research topic.

You won't get by on LIDAR alone though. You'll need cameras for stuff like identifying road lines and signage.


From my understanding, lidar is an improvement over cameras because it works even when there's no light for the cameras, as well as providing 3D data.


How about someone solves the way-easier problem of boring highway driving first?

I don't understand why the full monty has to be in version 1.0. Give me a good automation on the low hanging fruit, and let convergent evolution of society, infrastructure, and technology iterate to the more difficult use cases.


That is what Tesla and other car manufacturers are doing essentially, isn't it? However, this will always require a human to be paying attention. Highway driving is boring until it suddenly isn't. Driving perfectly on a highway is just as difficult as driving perfectly in a city.

As automation becomes better humans will be paying less and less attention to the road, making this technology somewhat dangerous in the interim. There has already been a fatal accident involving a Tesla on autopilot where the driver was watching Harry Potter.


It is my belief that the boring highway situations decay in the following paths:

* Predictable termination. The user must affirm that they are ready to receive the car. :: User takes over.

* Operator emergency. The user is unresponsive or active indications are that they require emergency assistance. :: auto-park in the nearest assistance area, (video monitored), emergency response crews also en-route.

* Unpredictable catastrophic failure. Internal/External doesn't matter. The vehicle is no longer responsive to the global mesh and a timeout condition occurs. Emergency response is dispatched to the area of incident automatically.

The 'driver taking over' in your scenario would be that final case; however, most of those incidents could either be detected, with minor failures avoided/tolerated, or binned into the second category. Any other cases are extreme and would result in a cascade anyway, likely even if the human were paying attention as normal.

Thus, freeway travel really is the low-hanging fruit: develop a space reserved for automated vehicles, preferably with app-assisted ride-sharing/carpooling.
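A minimal sketch of how those three decay paths might be dispatched; the enum names and handler actions are invented for illustration, not any real vendor's API:

```python
from enum import Enum, auto

class HighwayEvent(Enum):
    PREDICTABLE_TERMINATION = auto()  # route segment ending, user confirmed ready
    OPERATOR_EMERGENCY = auto()       # user unresponsive or needs assistance
    CATASTROPHIC_FAILURE = auto()     # vehicle unresponsive, mesh timeout

def handle(event: HighwayEvent) -> str:
    """Return the action for each decay path described in the comment."""
    if event is HighwayEvent.PREDICTABLE_TERMINATION:
        return "hand control to user"
    if event is HighwayEvent.OPERATOR_EMERGENCY:
        return "auto-park in nearest assistance area; dispatch responders"
    return "dispatch emergency response to last known position"

print(handle(HighwayEvent.OPERATOR_EMERGENCY))
```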


Good analysis, but I think we should call out the "predictable catastrophic failures."

Humans can see the scenarios below coming from enough distance to mitigate the problem and AIs have not yet demonstrated equivalence:

1. Slow/erratic person or vehicle suddenly veers into the path of travel. AIs so far tend to just be ultra-safe here and stay far away, which is not the same thing as understanding the situation.

2. Construction zone path of travel is suddenly obstructed. (I'm picking a major scenario to exemplify a class of scenarios where humans surpass AIs still in predicting their environment.)

3. A vehicle ahead performs an emergency maneuver, which communicates unseen/undetected driving hazards. A human can reason from what the vehicle did to what the hazard might be, and immediately begin mitigating the hazard. AIs have not been very forthcoming with detailed behavior descriptions, but they all appear to still be mainly designed as advanced control loops (that is to say, mostly stateless). And yes, it's going to be a legal morass when AIs begin to follow "rules of behavior," since there will always be exceptions.


But this accident was due to a "bug" in the Model S's sensor suite and software, and would reportedly not have happened with a LIDAR. I actually trust a self-driving Google car on a highway more than I trust myself at 2 AM, or in fog.


I think the idea is that you can start by having complete automation (without requiring driver attention) on highways, before enabling it everywhere else. Tesla's current autopilot is neither.


I found your reply kind of odd. Why would Waymo get into the business of selling LIDAR sensors so that hobbyists could use them in their projects, when that clearly isn't their business model?

>But since there are no prices, no products, and no actual announcements, it seems to boil down to Google feeling the heat as the company that used to be associated with bringing self-driving cars to market. I wonder if they have been doing consumer surveys on what people think about self-driving cars and finding out that Google is rapidly dropping off most people's radar.

There is a product and it's currently outfitted to FCA Pacifica Hybrid minivans.

>They don't make cars, they can't get people to partner with them, and they haven't been successful at showing meaningful progress. Meanwhile, in the Bay Area you can't drive down a freeway without seeing some chortling Tesla owner talking to their friends while the car moves them along in rush-hour traffic.

Waymo was never in the business of making cars. As for not getting anyone to partner with them what exactly would you call their partnership with FCA?


Fair enough; if it helps, here is what's odd. Let's say, for the sake of argument, that Waymo does in fact have a LIDAR product that is every bit as good as what everyone else is selling, but they can make it at 1/10th the price.

For the sake of argument, let's say the typical net profit margin on a LIDAR unit is 10% (the higher the margin, the better this gets). Now, if $x is the price of the typical LIDAR unit, then 0.1x is the profit. If Waymo can make an equivalent unit for 0.1x, they could sell their units at 0.5x (half the current market price) and make 0.4x in profit (roughly 4x what the other manufacturers are making per unit).
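The arithmetic above, spelled out under the comment's own assumptions (10% incumbent margin, Waymo's manufacturing cost at one tenth of the incumbent price):

```python
# Per-unit economics, normalised so the incumbent's price is 1.0.
x = 1.0                     # incumbent LIDAR price
incumbent_profit = 0.1 * x  # 10% net margin
waymo_cost = 0.1 * x        # builds an equivalent unit at 1/10th the price
waymo_price = 0.5 * x       # undercuts the market by half
waymo_profit = waymo_price - waymo_cost

print(waymo_profit)                    # 0.4 (i.e. 0.4x per unit)
print(waymo_profit / incumbent_profit) # 4.0 (4x the incumbent's per-unit profit)
```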

To me, that would be a press release that made sense. It would say, "We're going to own the LIDAR market with this product; all the self-driving cars from everyone are going to use them, and when you multiply by the number of cars, that is going to make us billions of dollars."

That is a business announcing they have a huge advantage that they are going to use to make huge profits which will keep them in a leadership position for making the gear people use in deploying self driving cars.

So for me it was odd that they announce what seems like a huge advantage but they aren't selling it to anyone except their one existing partner who opted not to participate in the press release. How did you interpret their press release?


The LIDAR market is estimated at $1-3B by 2020. The automobile market is estimated at around $1T. Owning a $1B market is pocket change for Alphabet; it's not even worth their while. Being cost-competitive at owning a $1T market would make them the most valuable company on earth.

The way I read the press release, it's a way of saying "Self-driving cars are here, they are reliable, and they're not just a rich person's toy. Every car sold on the roads will soon be self-driving, it's the biggest innovation in automobiles since the Model T, and you better have a piece of it or you will be left behind." Then they get all the major auto makers on board, and that becomes a self-fulfilling prophecy, one where they have a monopoly position on the most crucial component (which is actually the software, not the LIDAR).

Google is very well acquainted with spending billions of dollars on "free" products as a way of deepening their economic moat on the component they make money on. This is just an extension of this strategy to a market that's at least double the size of the total advertising market they've traditionally gone after.


"I think there is a world market for maybe five computers."

Thomas Watson, president of IBM, 1943

Is the LIDAR market really just $1-3B because of how expensive the units are now?


It's possible, but if LIDAR is actually a fundamental breakthrough technology at $7500/per that opens up lots of other new markets, it will require buy-in of many, many entrepreneurs seeking to find uses for this technology. Betting on that likely looks a lot worse to Wall Street than betting on self-driving cars.


> Owning a $1B market is pocket change for Alphabet; it's not even worth their while.

If this line is true, it means Alphabet is not free from the Innovator's Dilemma. That means they have a very systemic risk of being undercut out of a market.

Anyway, it doesn't make any of what you said wrong. It's just interesting because avoiding the dilemma was one stated goal for Google when creating Alphabet.

And, well, another possible explanation is that they have limited production capacity, so they cannot physically sell to the entire market.


Alphabet's absolutely subject to the Innovator's Dilemma. I left Google a few months before it became Alphabet and I could think of a half dozen markets where they'd be potentially vulnerable to it. (I also personally had projects I worked on that made $100M and were canceled because they couldn't make a billion. $100M would be absolutely awesome in most other companies, but it's pocket change to Google.)

The tricky thing about Innovator's Dilemma situations is that it's very difficult to predict a priori which $1M markets will be stuck at $1M and which will eventually grow to become multi-billion, because this usually results from changes in consumer behavior or technological capabilities that haven't happened yet. It's a "dilemma" because the companies that overlook these opportunities are acting rationally.


If every car in the world uses LIDAR, how is it still going to be a $1-3B market?


Not every car in the world is going to use LIDAR unless someone comes up with viable self-driving car software. Right now, Alphabet/WayMo is in the best position to do that.


One guess: the hardware cost for self-driving is relatively tiny, and will trend towards zero.

But in the ideal case, owning the self-driving transportation network in most major cities, with strong network effects (assuming shared rides + brand + other stuff), and selling rides, is worth hundreds of billions or more.

And not sharing tech, especially possibly superior tech, is a good strategic move.


I found your reply kind of odd. Why would Waymo get into the business of selling LIDAR sensors so that hobbyists could use them in their projects, when that clearly isn't their business model?

Because let me assure you that an affordable, reliable, accurate, mass-produced LiDAR would be a complete game-changer in many industries, not only automotive: robotics/drones, AR/VR, industrial, construction, surveillance, etc.

Give this technology a small enough package and price and it would go into literally billions of smart devices.


I just came across this and have not used it, but it may be something you're looking for:

http://scanse.io/


Don't forget another solid state scanner, available now with development kits - http://leddartech.com/modules/leddarvu/


Yes, though it's early days. What's currently out there generally lacks speed/coverage and accuracy. The real breakthroughs will come with solid-state LiDAR, but it's still a couple of years out from mass-market availability. Lots of companies are working on it, though.


I read Google's position to be much stronger than you suggest.

Selling the sensor as an off the shelf part is a distant second to their strategic platform and data goals.

"They can't get people to partner with them" because these companies are afraid of being marginalized like IBM was with MS-DOS. They are just watching the dust settle to see if they can buy and build enough on their own to stay competitive.

Finally, your cash flow analysis doesn't actually tell us how much Google has invested here. It could be $10B for all we know.


Public perception doesn't matter that much because the customers are car makers. What matters is their technology, which seems to be state-of-the-art and certainly better than what Tesla has.


Is it, though? What Google has is certainly better than Tesla's publicly available lane and crash assist, but Tesla has already shown demos of true autonomous driving, in fog, noting pedestrians, etc. It likely just needs a lot of training, but the cars are already on the road with the hardware.

Is Google ahead for a final product? Quite possibly, but they don't have a product, and Tesla doesn't look far behind software-wise, though it is hard to tell.


Krafcik mentions in his presentation that their cars are down to 0.2 disengagements per 1,000 miles (which would more aptly be described as 1 disengagement every 5,000 miles), and their cars are actively seeking out challenging situations; it's not just easy/random driving. This is incredible. They are leagues ahead of the nearest competition.

Krafcik also mentions they are now making the lion's share of their progress using their simulation engine, where they can model every conceivable variation of any bizarre/difficult encounter they have on real roads. In sim they accumulated a billion miles in 2016.
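The rate conversion mentioned above, spelled out:

```python
# 0.2 disengagements per 1,000 miles is one disengagement every 5,000 miles.
disengagements_per_1000_miles = 0.2
miles_per_disengagement = 1000 / disengagements_per_1000_miles
print(miles_per_disengagement)  # 5000.0
```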


A lot of folks have been saying that Tesla will win, because they will gather more data from real conditions.

This never made sense to me. You certainly need enough data, but how you interpret and process that data is far more important.


The popular idea that Tesla will just keep stuffing matrices down the throat of their training pipeline until a self-driving inference model emerges from the other end doesn't make a lot of sense.


Neural nets can take you most of the way there, but pattern recognition alone will not solve the driving problem to completion.

Waymo's autonomous platform is a Frankenstein of various machine learning techniques, and much of it isn't glamorous; it's less contingent on big breakthroughs than on elbow grease. Google demoed proof-of-concept full autonomy in 2012, and much of what they've been doing in the four years since then is the tedious job of addressing and validating their system across the full spectrum of edge cases that must be dealt with if they ever hope to foist their safety-critical software upon the public.

It's not clear to me that Tesla's current development paradigm will ever be sufficient to completely take the human out of the loop. Tesla's approach is incremental, and I suspect they'll have to make some big changes if they wish to fully close the gap. Waymo has kept their eye on the prize from day 1.


Also, how much of that data can you realistically send back to yourself? Waymo can pull the arrays from the cars nightly if they need to. Tesla, on the other hand, has to send the data back over a customer's network connection, which is much more limiting. So Tesla might be getting tons of miles, but a billion not-very-detailed miles might not be as useful as a million incredibly detailed miles.


I think Tesla pays for the network connection in all of its cars. It sends back remote telemetry and other self-driving data, serves up software updates, and monitors the car's network for intrusions via a VPN.


That network connection is cellular, so at best it's LTE, but if I remember right it's actually just 3G. I can't believe they are pushing the kind of data people are talking about over that kind of connection.


It makes sense because virtually all machine-learning algorithms work better, and can learn faster, if you have more data.

Think of it this way: if Tesla wants to test a particular algorithm for a particular driving situation, they can "play it back" over an enormous amount of real-world situations. They will have tons more potential edge cases with which they can validate their algorithms.
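A hypothetical sketch of that "play it back" idea; the frame format and the toy policy are invented for illustration, and note that a disagreement only flags a frame for review rather than proving the algorithm wrong:

```python
# Replay a candidate policy over logged frames and flag every frame where it
# disagrees with what the human driver actually did.
logged_frames = [
    {"speed_mps": 28.0, "lead_gap_m": 40.0, "human_action": "cruise"},
    {"speed_mps": 28.0, "lead_gap_m": 25.0, "human_action": "cruise"},
    {"speed_mps": 10.0, "lead_gap_m": 6.0,  "human_action": "brake"},
]

def candidate_policy(frame):
    # Toy rule: brake when the gap is under roughly one second of travel.
    return "brake" if frame["lead_gap_m"] < frame["speed_mps"] else "cruise"

disagreements = [f for f in logged_frames
                 if candidate_policy(f) != f["human_action"]]
print(len(disagreements))  # 1 flagged frame; a human still has to judge who was right
```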


Where exactly is all this data being stored and transferred? Nowhere. The car doesn't have the storage and it doesn't have the bandwidth and all indications are it isn't transferring anything that amounts to actual images of the many many cameras it has.

Sure, they can push a beta algorithm to cars and record high-level decision making between human & algo, verifying it's not totally out of whack. But that's hardly something that is going as training data into the models.


This used to be true. Modern machine learning needs far less data. The classic example is taking images and transforming them in hundreds of ways (scaling, rotation, skew, etc.) for training.

Big data is nowhere near as much of a competitive advantage as it was three years ago. It seems not everyone outside the field has noticed that, though.
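The augmentation trick mentioned above can be sketched in a few lines; the transforms here (flips and 90-degree rotations) are just the cheapest examples of turning one labelled image into several training examples:

```python
import numpy as np

# One labelled "image" (random pixels stand in for real data).
rng = np.random.default_rng(1)
image = rng.random((32, 32))

# Cheap, label-preserving transforms multiply the training set.
augmented = [image, np.fliplr(image), np.flipud(image)]
augmented += [np.rot90(image, k) for k in (1, 2, 3)]

print(len(augmented))  # 6 training examples from a single labelled image
```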


I wonder if this would be true in the case of self-driving car algorithms, though (which I know nothing about). It always seemed like the hard part about self-driving cars was the 0.1% edge cases where something out of the ordinary could result in a catastrophe if not handled correctly.

Image classification seems like it would be very different, most importantly that 99.9% "correct" would be a great achievement, but for self-driving cars a .1% failure rate would be completely unacceptable.


We are 30 to 50 years away from a Level 5 car.


Do you have more examples of "big data is not as much a competitive advantage", in the form of articles or research? I'm not in this field, but it's a fascinating development. It would be interesting to see to which degree it helps to perform automated transformations to increase the value of each piece of training data.


Lol, thinking machine learning is only image classification.


Thank you for the down-votes. I guess all the experts on HN know how easy it is to simulate training data because they all took the 101 course on how to rotate/resample images. That is uniquely an image classification technique.

Please, oh wise ones, how do we simulate NLP data, numeric data, finance data, biological data, and anything else machine learning is used for?

Oh, you are able to classify dogs and cats in images after a 2-hour YouTube video. How nice.


Both you and renesd are correct.

Renesd is correct that "big data" is overblown. There are diminishing marginal returns - you need orders of magnitude more data for the same incremental gain (and this blows up well beyond however many millions of cars Tesla can hope to run).

You're correct that data augmentation is only a marginal technique to squeeze out more performance, and not generally possible in many domains.
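A toy calculation of those diminishing returns, assuming test error follows a power law in dataset size (the exponent 0.3 here is invented purely for illustration; real learning curves vary by task):

```python
# If error ~ n^(-alpha), halving the error costs a fixed *multiple* of data.
alpha = 0.3                # hypothetical learning-curve exponent
factor = 2 ** (1 / alpha)  # data multiplier needed to halve the error
print(round(factor, 1))    # ~10.1x more data per halving
# Two halvings -> ~100x the data; three -> ~1000x. Raw fleet size
# stops being the bottleneck pretty quickly under this model.
```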


From what I've observed of member behavior on HN, I suspect that the downvotes may be a response to tone as opposed to content.


You need to think this through more carefully. What would "playing it back" give you? Presumably all that would give you are a bunch of incidents where the human driver disagreed with the algorithm's output. That's it. We don't know whether the algorithm is actually wrong. Maybe the driver made a mistake. Maybe the driver wanted to make an illegal turn (very common). Maybe the driver was lazy and made a rolling stop. A human still needs to go through each incident that is flagged and manually label it.


The criticism, though, has been that the data they have is not detailed enough.


Argument is that Tesla has more representative / real world data.

Big public opinion perspective here too.


It makes me wonder, though, if Google is currently at a disadvantage because their fleet of self driving cars is teeny compared to what Tesla has and will soon have. That is, Tesla has tens of thousands of cars with Auto Pilot sensors on the road, and (afaik) they have access to all of that real-world data to train their algorithms.

A big advantage I see in Tesla's strategy is that all of their cars are now shipping with full self-driving hardware. Even if that hardware isn't actually used to control the car, Tesla will have an order of magnitude more real-world data than anyone else.


Let's not compare 1,000 words of Reddit to 1,000 words of Shakespeare. The quality of the data is very important.


But can they actually send all that data back to their data centers? If they are really capturing that much data, they would need to send it back using consumer home internet connections which doesn't seem realistic. I don't think the fleet size is as large an advantage if you can't actually get 100% of the sensor data.


It's important to note Google/Waymo's tech is limited to places that have been heavily manually mapped and test driven. They're basically creating a virtual track for their cars to follow, and presuming that changes in the real world can be detected and distributed to the entire fleet.

Other companies seem to be taking a generalized AI approach. Their cars work everywhere, but how well?

It'll be exciting to see which approach works best.


This comes up on HN quite a bit, but I have never seen anything that suggests that Google doesn't also use a generalized AI approach.

Having high res maps certainly helps, and if you have that data, then why not use it?

It is no different than what people do. If you drive the same route all the time, you have some expectation of what lies ahead.


This information is not current, but I know that they used to detect traffic lights by looking for them in their maps. By that I mean they knew where the traffic light should be, then did the basic geometry to figure out where that is in their image, and then determined the color of the pixels to figure out what the light was saying. That absolutely would not work without the maps. I am willing to bet that their approach fundamentally depends on the maps. That's fine, it just means they can't work in areas that aren't already mapped (and you have to have one of the maps stored locally in the car, as those files are gigantic).
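To make the map-prior idea concrete (this is NOT Google's actual pipeline, just a generic pinhole-camera sketch of "know where the light is in 3-D, project it into the image, look at the pixels"; all numbers are made up):

```python
import numpy as np

def project(point_cam, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a camera-frame 3-D point to pixel coords."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def classify_light(image, pixel, window=2):
    """Average a small patch around the projected pixel and pick the
    dominant channel as a crude red-vs-green decision."""
    u, v = int(pixel[0]), int(pixel[1])
    patch = image[v - window:v + window + 1, u - window:u + window + 1]
    mean = patch.reshape(-1, 3).mean(axis=0)
    return "red" if mean[0] > mean[1] else "green"

# The map says a light is 2 m right, 3 m up (camera y points down),
# 20 m ahead of the camera:
pixel = project((2.0, -3.0, 20.0))
image = np.zeros((720, 1280, 3), dtype=np.float32)
u, v = int(pixel[0]), int(pixel[1])
image[v - 2:v + 3, u - 2:u + 3, 1] = 1.0  # paint a green light there
print(classify_light(image, pixel))  # green
```

Note that with no map prior you'd have to find the light in the whole frame first, which is the part that "absolutely would not work without the maps."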


> they knew where the traffic light should be, then did the basic geometry to figure out where that is in their image, and then determined the color of the pixels to figure out what the light was saying.

This has to be really old information, though. They have that video of the car detecting school bus stop signs and a police officer directing traffic. A stop light is child's play after that.


Why do you assume those other things are harder to detect? Computer vision is tough, and human detection is one of the most researched problems in the field. Uber's self-driving cars were running red lights not too long ago. And it's important to understand that a video is not a live demo.

If Google has an approach that works for them and it depends on the maps - that's fine. I'm just pointing out that they (probably) made a decision a long time ago that they are going to use those maps to further their self-driving technology. It's a design choice, with pros and cons, like any other decision.


And in steps Google Street View. They have been planning this for years.


Google Street View contains images, not point clouds. And based on my usage, it's updated roughly every year. This is not even close to sufficient for their self-driving cars.


> Google Street View contains images, not point clouds.

You are wrong[1]. Streetview vans have been outfitted with LIDAR for years and generate 3D models as they drive. Just because you do not see the point-cloud data in your browser doesn't mean Google doesn't have it.

1. http://google-street-view.com/about-google-street-view/


You can see the basic shapes that the images get mapped to in the browser.

I'm sure someone built a hack that let you explore that directly.

http://callumprentice.github.io/apps/street_cloud/index.html

https://flowingdata.com/2014/03/26/reconstructing-google-str...


Neat! I stand corrected. I wonder if the map data they have all over the country/world/wherever is comparable to the cities they test their cars in. If so, that would be fantastic.


Perhaps if the tech is good enough for now, it will become somewhat widespread, and then force a de-facto redesign of the traffic "API." E.g. maybe lights will have hidden IR identification beacons intended for self driving cars. Maybe traffic police will have special manual beacons that work on those cars while directing traffic, etc.


This is a great point. Additionally, in a world where there are many competitors with self-driving cars, I can't imagine roads not having an API of sorts to simplify the work cars have to do. Instead of painted line markers, why not IR beacons or something similar? It could be the same for road-work signs or temporary barriers.


Software that drives into a truck parked across the road, because it confuses the truck for a road sign, is nothing I would use autonomously. [1] And I would not call it "Autopilot", especially in a shipping product.

[1] https://www.theguardian.com/technology/2016/jul/01/tesla-dri...


I broadly agree that Tesla may take the lead. That said, I know they've announced autopilot being capable of handling fog and various weather conditions and have separately demoed their full autonomy, but I haven't seen anything combining the two.


Public perception doesn't matter that much because the customers are car makers.

I'll bet the number of people who buy an individual Intel CPU is a fraction of a percent of the general population. So why do I see Intel ads on television?


Is there evidence that Intel Inside produced a margin gain for the CPU product? Although it's not the tone of this article, the graphics suggest the branding campaign had much less to do with Intel's growth than general computing trends did. [0]

To answer your Intel/TV ad question more directly: because there are a lot of people on Intel's payroll whose salary directly depend on them placing ads on TV.

I do not demand you see my perspective; I'm not even sure that's how I see it. But really, how do we know that Intel's branding campaign was valuable to them? It seems obvious, but if it's super obvious, it should be easy to explain.

[0] https://conversionxl.com/cro-vs-branding/


Because people buy pre-assembled computers and Intel with its "Intel inside" had made it so that consumers will pay more if Intel is inside. Not sure if people will pay more for the same car with Google's sensor.


Comforting shareholders is one potential reason.


Why does Koch Industries advertise on TV?


Public perception is irrelevant because self driving cars right now are irrelevant.

It's very much a binary situation like Siri. Either the technology works and is 100% effective under all conditions or it doesn't. If it doesn't then people aren't going to look at it as a key purchasing differentiator because they won't really rely on it. Because of this I think we have at least a decade if not more before it matters who is winning the race or not.


What technology works 100%?


My knife works pretty well. As does my blanket.


Not to take this into 'silly analogy' territory, but I have many knives that dull after two weeks of use and others that stay sharp much longer; some that work great to fillet fish and some that I can pry a screw out of an oak log with and be fine; but none that can do both.

And I have blankets that I sleep great under in winter and summer and others that make me wake up in a puddle of sweat. And some I take camping and others that are very comfortable but very delicate.

So no I wouldn't say that anything I own works 100% of the time. Some work more than others, and good stuff generally works in several situations (or not, when it's optimized for one use case) - but overall, if your threshold for success is 'pretty well', then any self driving tech that isn't worse than humans would be good enough.


But the thing you missed is that having to have 50 different knives is a benefit :)


Yes, that's very true, but having to have 50 blankets isn't :)


your SO isn't a quilter are they?


Even if she were, I'm hard pressed to believe I'd consider the outcome a good thing...


Mathematics?


Got ya, mathematics doesn't work.


Smart phones


Every smart phone I've had has crashed at least once. Not to mention tons of wifi problems.


I can only assume he was joking


All I can say is that every time I drive through Mountain View, I am followed by or following or passing one of their self drivers, either the Lexus SUV or one of those little Logan's Run vehicles [0].

They get out of Mountain View, too. There was one stretch where a Google autonomous SUV was making the same left turn in Fremont at the same time 3 days straight. We saw it when I drove the kids to school.

In my view, Google/Waymo has the most experience and most miles of true autonomy, but I expect they're suffering from the same conflicted management syndrome that scuttled the rest of their robotics efforts. Alas.

[0] http://www.turbosquid.com/3d-models/max-ground-vehicle-logan...


The little guy looks more like a koala to me :) http://www.slate.com/content/dam/slate/blogs/browbeat/2014/0...


This is certainly a Google press release. On a meta level, what I find interesting is that when Tesla makes what is also basically a press release, few point that out. People just take at face value Tesla's pre-recorded demo videos, etc.


At least Telsa have published videos. (All videos are "pre-recorded" unless they're live, making it an odd qualifier to include and emphasize.) Has Alphabet/Waymo published any videos?

What Tesla makes is not just a press release, but a fully production car that's in the market TODAY which you can buy TODAY with all the necessary hardware already in place. All that's left is the software, which they are currently working on. Has Alphabet/Waymo got actual production cars on the market with the necessary hardware in place?

And these "pre-recorded" videos show an entire trip – from multiple camera angles – of what we're told is entirely production hardware, not a hacked-together mule loaded up with a pile of sensors on a roof rack. Is Alphabet/Waymo testing with production grade hardware built into a car at the factory?


Just want to call out that Tesla's videos seem to be a bit misleading. If you watch them closely, you can see they've made edits when switching from one camera angle to another. If you know the roads they are driving on, you can see the stretches of road that they cut out. So even though they seem to be doing great, they have yet to actually demonstrate a fully self-driving capability. Google let reporters test drive a car with no steering wheel or brakes.


That might be all true, but it still isn't reasonable to say Tesla is only issuing press releases when in fact they're selling the hardware to paying customers today.


Google released a video of their Prius driving a blind man through the drive-thru at Taco Bell, five years ago.


This was basically an entirely pre-staged sequence. It might as well have been a movie set. This is marketing at its finest: fiction presented as fact.


You seriously think driving through one Taco Bell in a mule at low speed is even remotely comparable to driving real streets at real speeds for extended distances with 100% final production hardware?


So you didn't watch the video, then.


Yes, I watched the video. And to jog my memory I just watched it again now. Exactly as I described earlier: "a hacked-together mule loaded up with a pile of sensors on a roof rack."

That video has exceptionally high production values indicative of a carefully stage-managed performance on near-empty streets with exotic hardware that's never going to be used in any consumer car. I don't mean to diminish Google's achievement, but it's hardly the same as actually shipping hardware to customers.


I do not believe that Google was being disingenuous with their blind-driver-goes-to-Taco-Bell video. But that's beside the point: demo videos don't mean shit. Getting a car to noodle around the neighbourhood under supervised conditions is a far cry from having a vehicle that can do that reliably all the time.

Waymo is down to 1 emergency disengage for every 5000 miles of driving, and that's a far more suitable metric for progress.


I notice that this news piece is headlined "Alphabet's Waymo" thus avoiding mention of Google. I wonder whether this is good or bad for Waymo. I assume they're intentionally distancing themselves from Google and this is what it looks like.


Having spun/graduated out of X (also now an Alphabet subsidiary), Waymo is now a subsidiary of Alphabet and so by technicality it's more accurate to refer to it as Alphabet's Waymo, rather than "Google's Self-Driving Car"


It's good. From the article they mentioned that manufacturers didn't want to work with Google on self driving cars.

They didn't want to sit back and just let them monetize their customer's driving habits. Some manufacturers err on the side of customer privacy whilst others would prefer to own the revenue stream.


Probably neutral in the short term: like when the awful Pacific Bell renamed itself "Cingular," it might confuse a few people. In the long term, it's the same jalopy with a new coat of paint.


Admittedly I could be told I'm nitpicking, but Pacific Bell never renamed itself Cingular (not directly anyway). PacBell was bought by Southwestern Bell in 1997 and became SBC. In 2000, SBC Wireless partnered with BellSouth's mobile division to create Cingular. Cingular then bought AT&T Wireless in 2004. SBC later bought AT&T and renamed itself to the company it just bought, aka AT&T. In 2007, AT&T née SBC bought BellSouth and thus finally owned all rights to the Cingular name, which promptly was dumped and instead renamed to AT&T.

The phone company space is the most incestuous circle-jerk mess imaginable. Judge Greene has to be rolling over in his grave!


The Wall Street Journal published a helpful chart of all the Bell Telephone spin-offs, mergers, and name changes. https://si.wsj.net/public/resources/images/P1-BZ033_LIONDO_1...

(Also here in case of paywall: http://www.theverge.com/2016/10/24/13389592/att-time-warner-... )


Thanks for the history! It almost makes me long for good old Ma Bell, who at least had the good sense not to change her name...


It's kinda hard to find these days (gotta love Viacom's legal team!), but Stephen Colbert did a hilarious segment [0] on the Colbert Report about the history of AT&T/Cingular back in 2007. Terminator 2 indeed!

Jump ahead to 2:20 to see the segment: [0] http://www.cc.com/video-clips/eamlaf/the-colbert-report-bear...


For those not in USA: http://imgur.com/a/4lIWw


All that renaming and buying out of companies was basically Ma Bell reforming herself like the liquid metal Terminator from Terminator 2. Hence the comment about Judge Greene rolling over in his grave.


It's the new Google meta before shutting down products: disassociate them from the main brand to avoid reinforcing the prevailing meme.

https://didgoogleshutdown.com/


That site really enjoys saying that the outlook of things is sketchy. Apparently being redesigned is a sign of a sketchy future (Blogger), as is the site owners not being aware of a recent redesign (Sites). They also ignore a whole lot of other things that Google has (Nexus is dying because of Pixel, but oddly Pixel isn't on the site as its own line), etc.


The Economist this week has an article on Infineon's LIDAR-on-a-chip, which seems a lot more likely to get them out to that mass market than this.

http://www.economist.com/news/science-and-technology/2171210...


>> Can you imagine where they would be if they had used $10B of that to build a competitive electric car company?

That's not their "style". They like to focus on R&D, and at that they seem to be doing lots of important work (and maybe showing the world a new model, we'll see).

I wonder, though: is there any other breakthrough R&D project with enough competitive advantage they could spend those billions on, without hurting their brand (unlike something like robotics)?


I expect there are several things. The interesting question for me is how people nominally running for profit businesses decided it was "better" to hold cash than to grow the business. This is acutely true at Apple which is in even worse shape than Google in terms of billions of dollars sitting around and not doing anything. The rumor of course is that they (Apple) are building a self driving car that they will announce at some point but programs that large are hard to keep completely quiet.

So the economist in me is really wondering what that capital is doing sitting around when its owners are getting trounced for losing their leadership edge.


Not sure if you were at Google in 2009 when this question came up in the immediate aftermath of the financial crisis, but the answer is that opportunities are discontinuous. Economics in the real world doesn't work like "spend $x, make $y = $2x" in any sort of high-tech industry. Rather, it's "wait 10 years for the appropriate technology to be developed, spend whatever it takes to acquire it, then you have the opportunity to spend $x for the chance to make $100x". The gating factor is the development of the technology itself, which is unpredictable and depends upon certain intuitive leaps of faith by many, many researchers, sometimes working in concert and sometimes just exchanging ideas by chance.

Holding large cash reserves gives these big companies a chance to pounce when an opportunity does present itself. That's also why prices for genuine innovation tend to be kinda insane. It's a market with one seller and max 3-4 buyers, so not exactly competitive.


Makani. They are trying to effectively change the energy industry using kites. It is ridiculously ambitious.


Except Tesla has not proved their current approach can get them to level 5 in any real way other than some handwaving. You're comparing different things, and the majority of the value that will be created hasn't been won by anyone yet.


> chortling Tesla owner

Do chortling people buy Teslas or does the Tesla inspire chortles?


.. or spend the whole $40B and cure all forms of cancer, once and for all!


Waymo isn't the first to announce a low-cost LIDAR. Quanergy announced one last year.[1] They even demoed it. It never shipped. Continental, the big European auto-parts company, acquired ASC's excellent but expensive technology last year and promised a low-cost version. Hasn't shipped yet.[2]

These things will get cheap as soon as they're being built in quantity 100,000, instead of quantity 100. Really cheap if they can be made with standard CMOS processes, which has been done experimentally. Most are GaInAs technology, which is expensive.

[1] http://quanergy.com/s3/ [2] http://www.continental-corporation.com/www/pressportal_com_e...


Quanergy never shipped their product because they were not able to hit the performance required for it to be usable. So far it looks like only Waymo and Velodyne have shipped LIDAR sensors good enough to be the primary sensor on a self-driving car. Everyone else is still at the "vaporware" stage.


> They even demoed it.

Sort of. "Quanergy has not yet demonstrated a version of the S3 that performs to the specifications that they announced at their press conference"

http://spectrum.ieee.org/cars-that-think/transportation/sens...


The best cheap lidar for indoor robots, and limited outdoor use, is built by Microsoft, called "Kinect". Beats the pants off any lidar that existed in 2005 that cost less than $100k, except on range.

Comes with open source libraries and is a pretty great piece of equipment for indoor navigation.


Minus the part where it isn't a lidar. Sure, it's a great sensor, but if you need a lidar, then you can't use a Kinect. And alternatively, if you need 360-degree FOV, etc., an actual lidar is often better.


An IR sensor array != LIDAR. There is no pulsed laser in the Kinect.


What happens when every car on the road has lidar and your sensor is bombarded by oncoming traffic?


In general this shouldn't be a problem, unless the receiver saturates due to a relatively wide time aperture. So long as the receiver remains linear and temporal/spatial/code filtering is operating properly, it shouldn't be a problem to have 1000 vehicles in range. Because of the truly absurd bandwidth of an optical system (100THz relative to GHz radio) it's possible to keep other systems out of band.

One advantage is that the average brightness will drop with a square-law from the other sources. This means that sources more than several hundred meters should have rapidly dropping effects on SNR. That doesn't mean that you couldn't do a particularly bad job of designing the receiver (intolerance to interferers) or a particular good job of designing a jammer (knowledgeable intentional interferer).


Thankfully, the large physical size of a car means the amount of interfering sensor traffic is going to be at a maximum when you're on a busy highway and fairly minimal otherwise, something that is constantly modelled today.

It's a very interesting question especially if self driving motorcycles take off which have the ability to move in highly unpredictable ways.


With flash LIDAR, the duty cycle is very low. A ranging cycle is about 1μs, while you might cycle at 60FPS. The chance of interference is only about 1/10000 for each ranging cycle. If you put a bit of random jitter in the flash timing, you can avoid accidental repeated interference. You'll get a bad frame once in a while, but if the frames on both sides of the bad one look similar, you're OK.

You can tell if someone is aiming a big light source at you; all the pixels show the same range, probably zero.

Those rotating machinery Velodyne things may not be as immune, but that technology isn't going to be high-volume.
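Back-of-envelope version of that duty-cycle argument (the 1 us / 60 FPS numbers come from the comment above; the uniform-overlap model is a simplification, not a claim about any real system):

```python
pulse = 1e-6            # ranging cycle ~1 us
frame_period = 1 / 60   # 60 FPS -> ~16.7 ms between flashes

# With random jitter, another car's flash lands uniformly within your
# frame period; two 1 us pulses overlap if they start within 2 us.
p_collision = 2 * pulse / frame_period
print(f"{p_collision:.1e}")   # ~1.2e-04, roughly 1 in 10,000 frames

# Even with 100 emitters in range, most frames survive:
cars = 100
p_any = 1 - (1 - p_collision) ** cars
print(f"{p_any:.3f}")         # ~0.012, i.e. about 1% of frames corrupted
```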


Doesn't this limit you to at least a 50 ms response time before your CPU can "trust" the data? (before/current/next frames required to smooth before handing off to processing?)


In the odd case that the data is clearly garbage, you'll just drop it and keep going as before (so you add a frame to the response time at that specific moment).

In the general case of data affected by noise, it shouldn't be a problem: These things run a simulation of the external world, and every new input frame is never fully trusted, just used probabilistically to update the simulation. It is expected to carry noise, so if your sensors would tell you something like "a pedestrian in the sidewalk just jumped 10m in the air", you'd deduce "the guy most probably has kept walking as he was doing before, he possibly jumped instead, or he might have changed directions instead".
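A toy version of that "never fully trust a frame" idea: constant-velocity prediction plus a simple gate that discounts wildly implausible measurements. This is a sketch of the general filtering concept, not any real AV stack; the 2 m gate and 0.3 blend factor are arbitrary:

```python
def update(state, measurement, alpha=0.3, gate=2.0):
    """state = (position, velocity), one update per frame. Blend the
    measurement in only if it lands within `gate` meters of the
    prediction; otherwise coast on the model."""
    pos, vel = state
    predicted = pos + vel              # constant-velocity step
    if abs(measurement - predicted) > gate:
        return (predicted, vel)        # garbage frame: trust the model
    new_pos = predicted + alpha * (measurement - predicted)
    return (new_pos, vel)

state = (0.0, 1.0)                     # pedestrian at 0 m, 1 m/frame
for z in [1.1, 2.0, 13.0, 4.1]:        # third frame: "he jumped 10 m"!
    state = update(state, z)
print(round(state[0], 2))              # 4.04: the outlier never dragged the track
```

The single bad frame costs you nothing but one coasted prediction, which is why occasional interference is tolerable.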


Lots of Detroit based news coming from Waymo. I wonder how long it will be until Waymo has a larger base of operations in the area and makes a significant hiring push?


May already be happening. Anecdote: A real estate agent in Ann Arbor told me this week that most of her recent customers are coming in to work for Waymo.


Why MI and not CA?


Far better to develop a self driving car in shitty road conditions, potholes, black ice, fog, drifting snow...


As someone living in an area with shitty road conditions, if they can conquer those that is the point at which I start becoming a believer.


Why not India then?


Auto companies (and suppliers and so on) have a lot of engineering operations in Southeast Michigan.


If their business plan is to partner with car companies, Detroit is where the car companies live.


My guess would be that MI as a "car state" with plenty of struggling cities is probably not shy about handing out sweet benefits to companies willing to set up shop there (especially for car related stuff as that probably plays great politically)


The press release is light on actual numbers. If they can build LIDARs much cheaper than the competition, and use Google's TPUs for the AI, they might have a genuine differentiator for a few years.


If it's $7500 for a 3D LIDAR (that is, a cheaper version of the Velodyne LIDAR system), then that puts it in range of SICK and Hokuyo 2D LIDAR sensors, although both of those only have around 180-degree FOV - but they both have a similar price tag (IIRC, around $3-4K per unit).

Still - that's a large fraction of the cost of the vehicle, so I don't see it as that great of a breakthrough on price.

Honestly, I personally think that we'll see flash LIDAR or some other non-mechanical scanning tech (along with machine vision cameras - or maybe only such cameras) being used. Indeed, if machine vision cameras could be made to see in a wider range of wavelengths (maybe into near IR and UV?), and also have quick adjustable focus, and some way to deal with dirt and debris - such cameras would probably be the best way to implement sensing.

Ultimately, the issue isn't so much with the sensors, but with the software; integration of the sensor data into a cohesive whole and then interpreting it properly is far from a solved task (deep learning seems to work well?).

/I'm probably rambling out of my nether regions, tho


Besides LIDAR, Alphabet/Google is adding radar now too.

I wonder: if more and more vehicles now have a radar, isn't it unhealthy to be exposed to so many radars on the road? I mean, it's well known that military-grade radar can be very unhealthy. I would have assumed that LIDAR plus cameras plus ultrasonic would be enough.


Military grade radars can run at peak power output of over 1MW; airport ATC radars at around 100kW. Given the power limitation of cars, car radars will be under a kW for sure, probably more in the single-watt range.

And health effects start happening if you absorb more than 1kW on a small area of your skin (burns, eye damage etc) - unlikely to happen if the total emission is sub-kilowatt.

(I presume there are regulatory limits for this sort of emission anyway, by FCC or otherwise, but I don't know for sure...)
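Rough inverse-square check on that, assuming a deliberately generous isotropic few-watt emitter (this overstates exposure, since real automotive radars radiate milliwatt-class average power in a narrow beam):

```python
import math

def power_density(p_watts, r_meters):
    """Isotropic power density (W/m^2) at distance r."""
    return p_watts / (4 * math.pi * r_meters ** 2)

# Even a generous 4 W transmitter, 1 m from your head:
s = power_density(4.0, 1.0)
print(f"{s:.2f} W/m^2")   # ~0.32 W/m^2
# FCC's general-population exposure limit above 1.5 GHz is 10 W/m^2,
# so even this worst case is ~30x under the limit, and the density
# falls off with the square of distance from there.
```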


This is great, but it's still LIDAR, which doesn't do too well when it rains, snows, or it's foggy out.


Earnings date must be approaching; time to distract from the fact that they're still just an online ad company?


It's going to be alright. I didn't get a job at Google either.


But there were positive notes in your interviews! They'll be sure to call again next summer!... They say 3rd time's the charm.



