As someone who commutes 35 miles twice a day, I can't wait to have a self-driving car. But I don't see how a car with no steering wheel is viable given today's infrastructure. What happens if I'm in a snowstorm and the lane markings are occluded? What happens if I need to drive down a dirt road or driveway? It seems like most drivers will need some way to take over from time to time...
As a commuting Canadian I'm interested in this being addressed as well, as I'm sure millions of others are. All the self driving lab work seems to be taking place in California. If I value my life, shouldn't I want to buy a self driving car that was designed in Michigan or Wisconsin? Who can even answer that question right now? Perhaps it's a matter for self driving car v2.0 to address?
Researchers are working on this in Michigan. The University of Michigan has a new MCity test facility[0], and I know one of the projects they are working on is how the cars handle snow. They basically still do LIDAR and match the z-plane (height of objects nearby) to localize the car when road markings are hard to see.
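Not the actual MCity pipeline, just a toy sketch of that kind of height-map matching (every name and number here is made up): rasterize the prior map and the live LIDAR scan into height grids with the same cell size, then search for the offset that lines them up best.

    /* Minimal sketch of height-map localization, in the spirit of the
     * approach described above (NOT the MCity code; all names and numbers
     * are invented). The prior map and the live LIDAR scan are assumed to
     * already be rasterized into height grids with the same cell size. */
    #include <stdio.h>
    #include <float.h>

    #define MAP_W  64   /* prior map size, in grid cells */
    #define MAP_H  64
    #define SCAN_W 16   /* live scan size, in grid cells */
    #define SCAN_H 16

    /* Sum of squared height differences with the scan's corner at (ox, oy). */
    static double match_cost(const double map[MAP_H][MAP_W],
                             const double scan[SCAN_H][SCAN_W],
                             int ox, int oy)
    {
        double cost = 0.0;
        for (int y = 0; y < SCAN_H; y++)
            for (int x = 0; x < SCAN_W; x++) {
                double d = map[oy + y][ox + x] - scan[y][x];
                cost += d * d;
            }
        return cost;
    }

    /* Brute-force search for the best-matching offset; that offset is the
     * vehicle's position estimate relative to the stored map. */
    static void localize(const double map[MAP_H][MAP_W],
                         const double scan[SCAN_H][SCAN_W],
                         int *best_x, int *best_y)
    {
        double best = DBL_MAX;
        for (int oy = 0; oy + SCAN_H <= MAP_H; oy++)
            for (int ox = 0; ox + SCAN_W <= MAP_W; ox++) {
                double c = match_cost(map, scan, ox, oy);
                if (c < best) { best = c; *best_x = ox; *best_y = oy; }
            }
    }

    int main(void)
    {
        static double map[MAP_H][MAP_W];
        static double scan[SCAN_H][SCAN_W];

        /* Synthetic height field standing in for curbs, signs, terrain. */
        for (int y = 0; y < MAP_H; y++)
            for (int x = 0; x < MAP_W; x++)
                map[y][x] = 0.01 * x + 1.0 * y;

        /* Fake "live scan": the patch of the map at offset (20, 30). */
        for (int y = 0; y < SCAN_H; y++)
            for (int x = 0; x < SCAN_W; x++)
                scan[y][x] = map[30 + y][20 + x];

        int bx = 0, by = 0;
        localize(map, scan, &bx, &by);
        printf("estimated offset: (%d, %d)\n", bx, by);  /* prints (20, 30) */
        return 0;
    }

In practice the search would be seeded by GPS and odometry and done with something smarter than brute force, but the matching idea is the same.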
Honestly, my prevailing theory is that this highlights how much further we really are from practical self-driving cars than Google's marketing department wants us to think. I think the self-driving car project always needs to sound 'a little over a year away' as good PR for Google, even though they know it's much further from being truly serviceable.
A year away in California and other states with good weather.
I have a suspicion we will see some amazing demos of self driving cars in the snow and rain, once they are good at it. It comes up every time we talk about bringing these cars to consumers; if Google or whoever has truly mastered these situations, I think they would want people to know.
In the descriptions I've seen, the Google car doesn't rely on the lane markings in normal driving. The lanes are pre-mapped and it can use any mapped landmark (including but not limited to the lane markings) to determine its position relative to the lanes.
The Google car also does not drive in snow AFAIK. Maybe that's bad for you but it's not a deal breaker for the first deployment of self driving cars as there are plenty of places where it never snows.
Should be interesting to see how it handles detours and temporary road re-routes. How would a self-driving car handle a two-lane two-way double-yellow-line road with one lane closed and a flagger alternating both directions of traffic through the same lane?
I think it probably can read lane markings, as well as construction signs, flaggers, etc. I know I've seen videos of it detecting hand signals given by bicyclists. It just doesn't make sense to rely on lane markings during normal driving, because lane markings are often missing, obscured, or just plain wrong.
So does blindly following lane markings that were incompletely erased and are now incorrect. Clearly the Google car does not do either of these things, because if it did then it would certainly have crashed many times over by now.
I've always assumed the current generation of auto-drive AI is going to be released, but rated for perfect-visibility climates only. This means it'll be fine for vehicles like self-driving 18-wheelers travelling back and forth through the Nevada desert; but the only place you'll be likely to see it in passenger vehicles is in the local taxi/car-sharing services (i.e. cars that never leave their metropolitan area) of cities in extremely temperate regions. So they'll get them in California and Texas, but not anywhere northern enough for snow or eastern enough for monsoons.
If Google can map most of the modern world with Street View, why can't they equip those same roads with sensors (that don't need to be visible) for their self-driving cars? Or better yet, given they have HD satellite imagery, will they just know exactly where the car is on the road (to the lane) at all times?
Asphalt surfaces, in general, aren't long-lived and require regular maintenance -- likely, those sensors would be destroyed during maintenance (which almost always involves pouring molten asphalt). This means that Google would probably have to re-install the sensors regularly.
Not sure if it's possible to do that easily, but it might be.
That's been done, in the form of magnets driven into holes punched in the pavement. Early systems just followed the magnets, but today, they'd just be used as an additional hint. It's quite possible that we'll see those in areas with heavy snow. Volvo has been testing this.[1] One of Volvo's arguments is that magnets are also useful for snowplow guidance. Heavy plows often chew up the infrastructure when snow is heavy enough that the road isn't visible. In some areas, poles are placed alongside roads so the plow drivers can tell where they're supposed to be plowing. Sometimes this works.
One possibility is that the cars would operate like drones. There would be a trained remote human operator who would take over if something went wrong.
As far as driving to some inaccessible location goes, I'd guess the cars would be more or less like personal trams. Just like people who take public transit now: if you needed to go down a dirt road, you'd walk or take a folding bicycle.
I don't think they can legally sell a car without a steering wheel. If they manage to make self driving cars legal at all they'll require someone in the driver seat.
Self driving cars are already legal in many places. Nevada, Florida, California, and Michigan all have laws on the books that specifically address self driving cars. Nevada issues special license plates for them.
I never said they didn't, I was replying to "If they manage to make self driving cars legal at all."
They are already legal and they are coming; this isn't a far-out concept. Semi-self-driving consumer cars (requiring a driver to take over if needed) will probably come first, then fully automated ones a few years later.
This makes sense. GM is partnered with CMU for self-driving technology. VW, BMW, and Mercedes have in-house efforts. Ford has had an in-house effort, but needs better technology. Google doesn't want to build and operate auto plants; the margins are low and their stock would decline.
Chrysler's head of engineering once said "We will have steer by wire and brake by wire over my dead body".
It's interesting that you put GM's partnership with CMU on the same level as Ford's with Google.
I don't think any major automaker has an in-house effort anywhere close to Google's. Most of them rely on suppliers like Bosch for their driverless car tech.
I think this shows that Ford is the only automaker that currently understands the shifts that the industry is about to undergo. None of the automakers or their suppliers can act like a software company in comparison to Google.
Disclosure: I work at GM; these are my opinions only; I don't work on SDCs, just very interested in them.
I don't think it's the tech (as such) that matters so much. I think the attitude, approach, and discussion matter much more.
For instance: Google has said that, for now, we can't have drivers in the loop and call it a self-driving car (see Tesla etc. videos[1]). That makes a lot of sense, but it's also very limiting.
Airbus et al have not figured this one out (Air France 447[2]). Perhaps automotive scale will provide more effective tools for managing attention and skill.
There are so many difficult non-technical discussions to overcome in this market - liability, user tracking (location history and habits), information leakage, failure modes (should I honk and flash lights if my human doesn't take control), aggressiveness settings (if the car is not going fast enough, the user will take control back), and many more, I'm sure.
Technical issues are going to be easy compared to the human issues. I think Google is addressing this by having an explicitly limited approach for now.
- Liability: the CEO of Volvo says that the manufacturer is liable if the vehicle crashes in autonomous mode. That's probably the way this will work out. That's a good thing; it means the manufacturer has every incentive to avoid crashes. Manufacturers will insure against this and build that into the price.
- User tracking: OnStar has that now. The next question is whether Big Google will permit you to opt out. Urmson, Google's head of automatic driving, says they don't need car to car communication. It's better to use sensor data. Lots of things on the road won't be equipped to talk, anyway.
- Information leakage: what's the question? Security, or privacy?
- Failure modes: there needs to be enough backup capability to get the vehicle stopped without a collision after any single component failure, and enough fault detection to detect any single component failure. It's probably sufficient to have two forward-facing radars with a backup computer monitoring the second one, dual actuators for steering and brakes, and backup throttle cutoff (see the sketch after this list for a toy version of that cross-check). The other side of failure is driving into a situation that the system can't handle. But that's usually a problem that appears slowly, as with heavy rain or fog, rather than all at once.
- Aggressiveness settings: this hasn't been a big problem for Google, other than having to edge into intersections in some situations. Self-driving cars have an edge in that they can look in all directions simultaneously, which means they can merge more precisely instead of more aggressively.
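Not how any real system does it, but here's a minimal sketch of the single-failure cross-check from the failure-modes point above (all names and thresholds invented): two independent forward radars are compared every control tick, and any staleness or disagreement triggers a controlled stop.

    /* Toy single-fault detection: cross-check two independent forward
     * radars and fall back to a controlled stop when they disagree or
     * one goes stale. All names and thresholds are made up. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_DISAGREE_M 2.0   /* plausible radar-to-radar spread (made up) */
    #define MAX_STALE_MS   100   /* data older than this counts as a failure  */

    typedef struct {
        double   range_m;   /* distance to nearest forward object */
        unsigned age_ms;    /* time since last valid measurement   */
    } radar_reading;

    /* True only if both readings are fresh and roughly agree; a single
     * failed or frozen radar trips this check. */
    static bool radars_healthy(radar_reading a, radar_reading b)
    {
        if (a.age_ms > MAX_STALE_MS || b.age_ms > MAX_STALE_MS)
            return false;
        double diff = a.range_m - b.range_m;
        if (diff < 0) diff = -diff;
        return diff <= MAX_DISAGREE_M;
    }

    /* One control-loop tick: drive normally on healthy data, otherwise
     * hand off to a minimal-risk maneuver (slow down and stop). */
    static void control_tick(radar_reading a, radar_reading b)
    {
        if (radars_healthy(a, b))
            printf("nominal: object at %.1f m\n", (a.range_m + b.range_m) / 2.0);
        else
            printf("fault detected: begin controlled stop\n");
    }

    int main(void)
    {
        control_tick((radar_reading){40.2, 20}, (radar_reading){40.9, 25});   /* healthy    */
        control_tick((radar_reading){40.2, 20}, (radar_reading){12.0, 25});   /* disagree   */
        control_tick((radar_reading){40.2, 20}, (radar_reading){40.9, 500});  /* stale unit */
        return 0;
    }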
Smart of Ford to partner with Google on this IMO. Besides Tesla, who seem to know what they're doing, I find it pretty strange that most(?) other auto companies seem to be basically going it alone on autonomous driving with an (understandable) complete lack of know-how when it comes to software (think, e.g., Toyota with its 80K or however many global variables).
I was listening to a radio program (KQED's Forum [1]) where they discussed the California DMV's new proposed regulations for self-driving cars, and from the point of view of someone who isn't insanely arrogant about humans' ability to safely operate cars, they seem quite draconian and luddite-ish. Google made a statement about these proposed regulations that echoed my feelings, but Audi was on the program defending the regulations and claiming anything approaching fully autonomous driving was a couple of decades away. This seems to indicate that they are at least a decade behind Google, who claim they're already there or will be within ~5 years, so it sort of blows my mind that they wouldn't be trying hard to partner with them or another firm (perhaps Uber?) that actually understands software (and particularly AI).
The CA DMV is being reasonable about this. 11 companies have applied to test autonomous vehicles on California highways, and some of them may not work very well. After all, Tesla just shipped a beta version of automatic driving, and it wasn't very good. Tesla's system is just lane keeping plus automatic cruise control (NHTSA Level 2) plus hype, minus a hands-on-wheel sensor. Read Road and Track's critique.[2]
California does have experience in this area. Caltrans had automatic driving demos in 1997, when Page and Brin were still students at Stanford.[1]
As I've pointed out previously, NHTSA Level 2 systems are not good enough for hands-off driving, but, because they can handle simple situations like normal freeway driving, create the illusion that they are. This is the "deadly valley". Drivers are just not going to be paying close attention with the system engaged. It takes several seconds for a driver to detect trouble and take over, by which time it may be too late.
NHTSA Level 3 systems (roughly what Google has driving around Mountain View) have real situational awareness and a good driving record. What Google has now should be viewed as a floor for automatic driving competence. Anything less than that, and the system has to make sure the driver's hands are on the wheel.
On the liability front, Volvo's CEO says that the manufacturer must accept liability for accidents when driving automatically. That's probably the way this is going to go. Tesla would like to off-load that liability onto the driver, but that probably isn't the way this is going to go. The first self-driving cars will probably be leased on operating leases, so that insurance, maintenance, and vehicle cost are bundled together.
I'm actually pretty happy with the Tesla system, and it DOES ask you to hold the wheel in sharp turns, faded lane markings, etc. It complains loudly if you ignore it.
It's miles from full autonomy, of course, but for what it does it works quite well in my experience driving between Seattle and Whistler, BC in snow, rain, and heavy wind twice in the last month (I only received the autopilot update a few weeks ago). It has clear limits and it's quite aware of them in general, protectively requiring drivers to take over before it runs up against them.
I don't think Google is a decade ahead of anyone in self-driving cars. The AI technology and datasets are shared pretty liberally, and as soon as someone makes a breakthrough, everyone can replicate it. It's much closer to fashion than rocket design in that sense.
Google will be the first, but I expect many to follow suit. Maybe if Google pulls another Android move and open-sources the driving tech, they could gain 90% of the market share by making it less interesting to develop competing technologies.
But look at other AI fields - speech recognition, image recognition, natural language processing - they are in the open and all improvements quickly circulate.
I definitely recognize the large amount of progress that's being made in ML and AI recently (I'm a computer vision researcher in grad school) and appreciate the amount of it that's being done in the open (at least through papers, but also commonly with open source code). However I haven't really seen what's happened in vision/NLP/speech happening with recent research in self-driving cars. Google in particular has been relatively secretive about their work and conservative in publishing any technical details, especially compared with their ML research. In fact, I don't think they've published any conference/journal papers or tech reports about autonomous driving at all (but would be happy to be corrected on this). As I understand it, they essentially bought Stanford's entire team working on the problem 5-10 years ago and have kept them working in stealth mode at Google X on it since then, sharing nothing beyond vague public statements with no technical details. Uber made a similar move, gutting CMU's robotics department a year or so ago, and I'd be kind of shocked to see them go in the opposite direction from Google on this re: publishing.
I think that, to a much greater extent than in problems like computer vision or NLP, many of the breakthroughs and much of the difficulty in making self-driving cars work are more about painstaking engineering efforts to ensure that every base is covered than about breakthroughs in AI, which makes it difficult for me to imagine how the knowledge could be shared short of, as you mentioned, an Android-like move where someone decides to open source essentially the entire project. Seems unlikely to me in this case, but you never know.
> complete lack of know-how when it comes to software (think, e.g., Toyota with its 80K or however many global variables).
They might have more know-how than you realize. When you're writing safety-critical software, things like "dynamic memory allocation" and "local variables" become dangerous due to the risk of memory leaks and stack overflows.
You could be right, but somehow I'm skeptical that's really why they were using thousands of global variables. I'd certainly be interested in hearing any more educated takes on Toyota's use of global variables specifically or the auto industry's capabilities in software more generally; I have no first-hand knowledge and am making a lot of assumptions (hence the throwaway, downvote away ;) based on generalizations I've heard about non-software-focused companies that end up having to write some code for their physical products.
"Local variables" can be allocated to global addresses by the compiler/linker toolchain, barring recursive functions. Runtime code loading may require a bit of extra care, but I wonder if safety-critical software really needs DLLs.
I read remarks by Sergio Marchionne (CEO of FCA, Fiat Chrysler) where he said that drivers enjoy driving and have a spiritual connection with their car (as I recall; I can't find the article).
It's quite odd to read that. A LOT of people drive cars they hate, in traffic they hate, and would much rather hand over control.
Audi may not think that autonomous vehicles are technologically decades away; rather, they may be trying to support that narrative so others do hold the belief, and subsequently sway legislators and the public into an over-cautious position.
Exactly -- it sounded to me like they think fully autonomous vehicles are decades away from being created at Audi, and hence would support legislation that prevents them from getting scooped by other companies with the tech chops to make it happen decades earlier.
What I'm saying is that even if they truly believe they can get to those vehicles at roughly the same time (which I think they might), I don't think they would want to. Life's good for Audi, so any sea change (even if they are a part of it) carries more risk than reward.
As an employee of another car company all I can say is:
Damn!
Similar to Tesla, Apple and Google are quite far away from the degree of industrialisation the automotive industry has today. Without this you can't build millions of cars cost-effectively. Also, cars are immensely complex in all the small things that need to be engineered diligently. So, that move makes the greatest sense.
Guys, this will be coming sooner than we think! I know a lot of other schools and companies are doing research towards this. I shared this one because I feel like it's the one most likely to happen soonest.