Google: Self-driving cars in 3-5 years. Feds: Not so fast (extremetech.com)
64 points by lispython 1261 days ago | 94 comments



If anyone is interested in some of the algorithms behind this technology, then I highly recommend Sebastian Thrun's "Artificial Intelligence for Robotics" course on Udacity [1]. He makes unintuitive probabilistic methods easily understandable. I'm not aware of a better source for getting a basic understanding of complex methods like Kalman filters and particle filters [2][3]. I had never even heard of a "histogram filter" until recently reviewing this material, and it's a perfect solution for a problem that I currently have.

[1] https://www.udacity.com/course/cs373

[2] http://en.wikipedia.org/wiki/Kalman_filter

[3] http://en.wikipedia.org/wiki/Particle_filter
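
For a concrete taste of what the course covers, here's a minimal 1-D histogram filter (the discrete Bayes filter) in Python. This is just an illustrative sketch in the spirit of the CS373 exercises; the world map, sensor model, and motion model values below are made-up assumptions:

    # Minimal 1-D histogram filter (discrete Bayes filter) -- illustrative sketch.
    # A robot lives in a cyclic world of colored cells and keeps a probability
    # for each cell ("where am I?"), updated by sensing and moving.

    world = ['green', 'red', 'red', 'green', 'green']   # assumed map
    p = [1.0 / len(world)] * len(world)                 # uniform prior

    P_HIT, P_MISS = 0.6, 0.2                   # assumed sensor model
    P_EXACT, P_UNDER, P_OVER = 0.8, 0.1, 0.1   # assumed motion model

    def sense(p, measurement):
        """Multiply by the likelihood of the measurement, then normalize."""
        q = [p[i] * (P_HIT if world[i] == measurement else P_MISS)
             for i in range(len(p))]
        s = sum(q)
        return [x / s for x in q]

    def move(p, step):
        """Shift the belief by `step` cells with some motion uncertainty."""
        n = len(p)
        q = []
        for i in range(n):
            q.append(P_EXACT * p[(i - step) % n]
                     + P_UNDER * p[(i - step + 1) % n]
                     + P_OVER  * p[(i - step - 1) % n])
        return q

    # Sense red, move right one cell, sense green, move right again.
    for measurement, step in [('red', 1), ('green', 1)]:
        p = sense(p, measurement)
        p = move(p, step)

    print(p)  # belief concentrates on the cells consistent with the observations

Sensing sharpens the belief toward cells that match the measurement; moving blurs it again. Kalman and particle filters run the same sense/move loop over different belief representations.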


I would also recommend anything by Sebastian Thrun on the topic of autonomous cars. His Udacity course is a great starting point, and it inspired me to do my graduate thesis on an autonomous car simulation.

If anyone is looking to get a little deeper, check out some of his literature during his time at Stanford:

http://robots.stanford.edu/papers/junior08.pdf

http://www.lcad.inf.ufes.br/wiki/images/7/7f/2010_-_Dolgov_e...

http://ai.stanford.edu/~ddolgov/papers/dolgov_gpp_stair08.pd...

Also, if anyone is interested, my thesis: https://github.com/mattbradley/AutonomousCar
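
For a flavor of the path-planning side those papers cover, here is a toy A* search over a grid in Python. This is my own drastically simplified sketch, not the hybrid A* over continuous vehicle states used in the papers; the grid and costs are assumptions:

    # Toy A* grid search -- a big simplification of the path planning in the
    # papers above (which search over continuous vehicle states).
    import heapq

    def astar(grid, start, goal):
        """grid: 2-D list, 0 = free, 1 = obstacle. Returns the path as cells."""
        rows, cols = len(grid), len(grid[0])
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start)]
        came_from = {start: None}
        cost = {start: 0}
        while frontier:
            _, g, cur = heapq.heappop(frontier)
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    ng = g + 1
                    if ng < cost.get(nxt, float('inf')):
                        cost[nxt] = ng
                        came_from[nxt] = cur
                        heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]

Roughly speaking, the papers extend this idea to continuous headings and kinematic constraints and then smooth the resulting path.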


I was surprised by how understandable the concepts are behind this stuff. I second this recommendation. They are really cool algorithms.

For real life, there is obviously a huge engineering component, but you can definitely see how the ideas could come together to build something like the automated cars.


Exactly! I linked to Wikipedia for contrast with how unintelligibly these topics are often presented for us mere laymen.


I think the best way to roll out self-driving cars would be to build up a base product (like Google has done) and then open-source at least a large portion of the software. Run a campaign where anyone reporting a critical bug gets their name on the underside of the hood in every first-generation car that Lexus/Honda/Toyota and Google collaborate on. Then run an ad campaign based on X thousands of programmers reviewing code, totaling X thousand days of effort. Easily the safest strategy, and it gives an easy response to any competitor: how many programmers have reviewed THEIR software?


Sure, and give away prototypes to test the software.


I can't wait till the day I can have my self-driving car drive me to work while I read a book. When it drops me off, it goes to recharge its battery, picks up my family from work or school, and maybe, when it's not in use, I can rent it out as a cab.


I can't wait until the day I don't have to drive into work because they woke up and realized I could telework/teleconference/telelive.

Then the miles I put on my car would be purely for my own pleasure.


The economics won't support it being your car for long, at that point.

When you can just ask for a car to pick you up and one will arrive, corporate robo-taxi fleets will have economies of scale over family cars sent off to earn their keep. With ordinary people not needing to buy a car, facilities for maintaining and supplying them will dry up too, as the manufacturing effort shifts to taxi-ready cars for fleets. And the price will shift up out of your range to reflect the earning potential.

The robo-car will end the private car as anything but a hobby comparable to the classic car.


Doubtful. In SF, there are several companies running into problems scaling up to this type of model (Zipcar, Uber, Scoot). Often there aren't enough cars/scooters to meet demand, or the wait for one to arrive is too long. People who value their time will always be willing to buy one of their own so they are guaranteed a ride where and when they want it. Not to mention the cost of maintaining such a fleet.

It's also worth noting this type of service is only effective in dense urban centers. Enough people live in the country/rural/suburban areas where owning a car is necessary.

Americans spend a ridiculous amount of money on cars as status symbols. It'll be a long, long time before that ceases to be the case.

Maybe you meant the robo-car will end the private car in cities where its citizens focus on different status symbols? I can see that being the case in SF and maybe NY in the near future, but not anywhere else in the US.


"there aren't enough cars/scooters to meet demand" because nowadays, a car needs a driver who isn't busy doing anything else. They will be far less rare when unoccupied cars can go in search of fares.


Zipcar and Scoot aren't cab companies. The cars sit without drivers until somebody reserves one online and picks it up. The only difference here is that the car can pick you up and drive you, instead of you having to drive it yourself. Self-driving cars don't magically solve every problem I've listed. All they mean is that you don't have to drive yourself and the car can park itself far away. More efficient traffic flows? Absolutely! Less traffic congestion? You bet! The end of the personal car? Not by a long shot.


Definitely this. The average car sits in the driveway 22 hours a day. It may not seem like it at first, but an all-taxi economy would be much cheaper for riders: instead of paying 100% of the cost of a car that sits idle most of the time, you'd pay only for your share of a heavily utilized fleet vehicle.
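
To put some numbers on the utilization argument (every figure below is a made-up assumption, purely for illustration):

    # Back-of-envelope comparison: private car vs. shared robo-taxi.
    # The point is utilization, not the specific figures.
    annual_cost_per_car = 8000.0     # assumed: ownership, insurance, maintenance, fuel ($/yr)
    hours_needed_per_person = 1.5    # assumed: daily hours of actual car use per rider

    # Private car: one person pays the whole cost; the car is idle ~22 h/day.
    private_cost = annual_cost_per_car

    # Shared fleet: each car is in service, say, 12 h/day, so one car covers
    # roughly 12 / 1.5 = 8 riders' daily travel.
    fleet_hours_per_car_per_day = 12.0
    riders_per_car = fleet_hours_per_car_per_day / hours_needed_per_person
    shared_cost = annual_cost_per_car / riders_per_car  # ignores overlapping rush-hour demand

    print(private_cost, shared_cost)  # 8000.0 vs. 1000.0 per rider per year

The catch, raised elsewhere in this thread, is that demand isn't spread evenly across the day: rush hour forces the fleet to be sized well above the average.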


Agree with your point. It raises another question for me: if we are talking about shared or non-owned vehicles that people interchangeably call and use when they need to go somewhere, aren't we inching closer to the same experience that other means of public transportation provide? And shouldn't we give those equal consideration in a world that is still built for and dominated by cars? For all the benefits we are citing here, there are still a lot of problems with a world dominated by cars (pollution, the space they take up, the roads they require that cut up communities and are inhospitable to pedestrians and other modes of transportation, etc.).


Space waste happens because of (1) ego, which means buying big and snarly cars to look like an action hero; (2) over-provisioning of car space, because you sit in one seat of a five-seat car on the off chance you might have passengers or cargo; and (3) cars parked, idle.

Robo-taxis have an economic attractor at the "runs forever, gets good mileage" end of the design scale, because energy and maintenance are the biggest ongoing costs. There would be incentives to make them small and efficient, maybe sized for one occupant. And they would not sit around idle, so they would need a long MTBF, which rules out designing for speed or beefiness and suggests electric (it's got fewer moving parts).

All of which means fewer, smaller, cleaner, slower, simpler vehicles. Aesthetes will pine for the days of muscle cars. In terms of greenness, they may come to eclipse buses etc., while retaining the advantages of cars (goes where you want, when you want, and is not full of strangers).


Except no company is going to want to buy enough cars to handle the demand during rush hour.


Given those cars make two reliable paying journeys per day, minimum, why wouldn't they?


Because two is not enough.


3D printers, build and reclaim fleetcars on the fly?


My first thought was "man, I could go to all the pubs without a driver"


So are you the driver of the vehicle or not? Under current law, you're the driver whether your hands are on the steering wheel or not. Indeed, people have been convicted of DUI while sleeping in the back seat of a parked car.

This is sort of the whole point: everyone thinks they can eat their cake and have it too.

DUI? I'm not the driver, the car is! Hic!

Run over a kindergartener? I'm not the driver, blame Google!

Auto insurance rates go through the ceiling because of untested technology? Wait, I'm the same driver I ever was, this is unfair!

Can't get insurance at all? Help, the Man is a luddite! I'm being repressed!


The computer is a better driver than humans are.

So, "The Man" in this case is a luddite, and making laws that require humans to be in control of the vehicle actually decreases public safety.


I don't think the car is a better driver than people are, but I think it will be soon. And once it is we should let people sleep, be drunk, or do whatever while their car drives them.


Yeah, I was thinking this too. While self-driving cars will probably be a killer for some industries... it's a huge boost to others, such as bars.


Move to a city with public transit and live your dream.


There's a big difference between being driven around in a cab and walking to a bus/subway station, waiting for the service and sharing a commute with strangers.

I don't mind public transit but I'd much rather be driven around by a cab.


It's pretty easy to do the math and see that once population exceeds a certain density, the idea that all humans can move around on the surface of the ground in 9' x 14' vehicles is no longer feasible. They simply don't fit. We are past that density in all major cities.
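
A rough version of that math, with assumed numbers (a sketch, not a real traffic model):

    # Rough feasibility check for "everyone moves in their own car at once".
    # Every number here is an assumption for illustration.
    people_per_sq_mile = 27000            # roughly Manhattan-level residential density
    sq_ft_per_sq_mile = 5280 ** 2
    road_share = 0.25                     # assume ~25% of land area is street
    road_sqft = road_share * sq_ft_per_sq_mile

    # A moving car at ~30 mph needs its own length plus a ~2 s headway:
    # roughly 14 ft + 88 ft of lane, about 9 ft wide (the 9' x 14' box above).
    moving_footprint_sqft = 9 * (14 + 88)

    cars_moving_at_once = road_sqft / moving_footprint_sqft
    print(int(cars_moving_at_once), people_per_sq_mile)   # 7592 27000
    # ~7,600 cars in motion per square mile vs. 27,000 residents: even before
    # intersections and parking, the geometry doesn't allow everyone to drive at once.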


That point often seems to be left completely out of the discussion about self-driving cars. Is making a 9x14' box an easier and more appealing mode of transportation really the best move for society? It's at least worth considering.


This is the harm-reduction debate. For example, Swedish snus and electronic cigarettes are much less harmful than smoking, but the argument goes that quitting nicotine altogether is less harmful than anything. Yet people still smoke and use nicotine, so switching the stalwarts over to less-harmful methods of nicotine consumption would save lots of lives and money.

Likewise, it may not be possible in the short term to convince everyone to stop driving cars. Self-driving cars won't fix the long-term problems of efficient transportation, urban sprawl, or emissions, but they can reduce the harm by efficiently providing car-like service to the public.


Robo-cars in taxi mode won't stop, they'll go from one job to the next, and they won't park except at times of low demand. Unlike drive-it-yourself cars, they won't eat up urban space for parking lots, and the total car fleet will be much smaller because the allocation only has to be "one per currently travelling commuter", more or less, rather than "one per human".


So just like taxis, then?


With no driver and a quarter the size or less for smaller (1-seat) units.


And instead of underground trains, there could be underground highways that automatic vehicles use.


I agree it would be nicer than mass transit, but it's not sustainable to go without public transit and without putting people physically closer to their needs.


I'm not completely sure; the cost structure would likely be very different. But imagine fleets of small (1-seater and 2-seater) automatic electric cars: call one on your smartphone specifying the route (for your job it'd probably be a recurring route), it arrives in front of your door, you climb in, it leaves you at your job and goes back to hibernation mode (or picks up somebody else in the area).


I love the idea of smaller, more efficient vehicles, but I think we hold too much hope in the fact that they could be "self driving". Why would a smaller self-driving car be more likely to gain in popularity over what most people choose to drive today? Why is it more likely to be electric? Those are two independent characteristics that haven't taken off on their own; I don't see how the self-driving aspect is going to change many of the existing problems we have with car-based transportation.


One of the great things about a fleet like this would be that you could change your car depending on where you were going and what you were doing. For example, you take a 1-seater to get around a city, but if you go on a longer drive you take a larger, more powerful car. This is one of the things I enjoy about using Zipcar: you get to drive fancy cars some days but only pay for the cheaper ones most of the time.

The great thing about an automated fleet is that you could actually switch cars during the journey. Take a small car to leave the busy urban areas and then switch to a larger car when you get to the motorway.


Absolutely. An alternative for longer journeys would be multiple small "pods" docking together as a bigger unit for increased efficiency, or boarding some sort of platform for long-range travel.


Yes, nothing quite as rewarding as waiting for a bus in the cold only to have it drive straight on by because it was full.


It's only a dream when you don't have more than 1 connection, and even then you do have off-peak issues. Mass transit needs a lot more work even in the NYC metro area, for example.


> maybe when it's not in use I can rent it out as a cab

Hey, there's a start-up idea: Buy a fleet of these and deploy them around the Bay as the Robot Cab Company. Have them queue at designated points or hailed via smartphone. I expect to see this at Y-combinator in 3-5 years.

Now, if we could only wedge a social networking component into it...


"Speed"-dating ?


Why bother with all that trouble of renting it out as a cab? Instead, be the person who rents the cab.


This is the right answer. Requiring most American adults to own cars that spend 90% of their time parked is enormously wasteful, both directly (expenditures on cars, maintenance, etc.) and indirectly (that we, societally, have to provide three times as much street space as each car will occupy at any one time: home parking space, rush-hour road space, and workplace parking space). The era of everyone having their own car should end.


If I had kids, I would like to know where they are. Maybe be able to do voice chat with them while they are in the car. Also, I imagine cars will become media and productivity hubs like our smartphones or tablets. Cleanliness is also a factor.


Another good article today on self-driving cars.

http://www.wired.com/autopia/2013/02/continental-autonomous-...

“I drove to West Virginia in this car,” he recalls. “It’s about a nine-hour drive from Detroit. Out of that entire nine hours, I drove for 45 minutes. And the only reason I drove that much was because there was this nice mountain pass, with sharp S-turns.”


I can appreciate that there are a lot of benefits to self-driving cars - safety, the possibility of reduced traffic and congestion, amongst other things. That being said, I'm always stunned at the complete lack of discussion about the topic of environmental impact (whether the impact of this technology will ultimately be positive or negative.)

If we are going to undertake such a major shift in how we all get from point A to point B, shouldn't that be a significant point of consideration? It is rarely even mentioned in the context of this discussion, which I find to be disappointing.


I actually have read discussions of this sort, but unfortunately don't remember where... :(

However, that being said, here are some of the points I've seen:

Pros: Car sharing is easier. Since you can pull out a smartphone and "order" a car to come pick you up at any point, there is less of a need to actually own a car. This should lessen the total number of vehicles manufactured/maintained/etc. The huge amounts of energy needed for making steel come to mind here. Also, if you are using CaaS (car as a service), many would probably opt for a cheaper option where the car stops and picks up other passengers on the way, offsetting the cost and therefore reducing the price. A person doesn't have to arrange a carpool this way. If you make it easier and cheaper to carpool than not to (currently it is cheaper, but a hassle), then people will do it more.

Reduced traffic congestion: This sets the stage for the other benefits. Cars stopping and accelerating less means less fuel used, and many more autonomous vehicles than human-driven ones can share a roadway without reducing speed.

Cons:

More suburbanization / lower population density: This is a big possibility. Living in a distant exurb would mean you probably have to own an autonomous vehicle, or schedule one to pick you up well in advance, so car sharing suddenly works less well... but a 90-minute commute today would be much faster on autonomous-vehicle roads, and people can read/chill/watch news/surf the net while cruising to work. Commuting is made more relaxing, so more people do it...

It's a very complex issue to try and understand the implications, but I really think that it would be an environmental net gain, due to the fact that, if everyone is using Car as a Service, then the CaaS providers will have cost minimization as a priority, and work to constantly increase efficiency. People only care about fast acceleration and performance when they are driving themselves. Computers don't have small penises to compensate for.


I can't see how it wouldn't be an improvement. Self-driving vehicles that automatically assess traffic conditions and reduce accidents will likely result in less gridlock, improved travel times, less wear on roads and other infrastructure, etc.

I also don't see why this tech couldn't be used in public transit/shipping, thereby reducing one of the largest cost factors (salaried drivers, poor lifestyle) and actually increasing the amount of available public transit.


The key thing is that the government is not going to (and should not) get very deep into regulating/legalizing this tech until it actually works, which is fine. 2020 does seem more realistic for a mass market roll-out.

So long as it's ready by the time I'm too old to safely drive, I'll be happy.


As long as the car can pass the tests set out for humans, there really shouldn't be a need for government involvement. We already have defined standards that we find acceptable. It is not like humans are perfect and the machine has to match that, though I do hope that the machines can exceed those existing expectations.


We've defined standards for humans that we find acceptable, in part because there is much that is implicit in the operator being a human. For example, a cop at an intersection, holding his hand up, then pointing left to redirect traffic is not something we test in a driver's test, but is something we expect human drivers to understand and obey. Perhaps the cars can handle this already, but there are likely hundreds, if not thousands, of these scenarios that are going to be necessary to think about now.

There is also a lot of legal work that will begin as these roll out, especially dealing with liability, and these things take time that simply selling millions of these cars in 3-4 years will not allow for.


It's probably a little more difficult than making the car take what we know as a driving test, but it could probably be boiled down to some reasonable number of city and highway miles in various conditions (hard rain, snow, fog) and a small number of situations to make sure it doesn't run down pedestrians and deer at random. In that respect, the endpoints of a journey would likely be harder to deal with than anything else (i.e. driveways). When it comes down to it, though, humans are an amazingly low bar, so while it would be nice to have perfect cars, it wouldn't be hard to save a few lives.


One thing that always comes to mind, and this may just be the pessimist in me, but remember the "flash crash"? Where a bunch of trading algorithms triggered short sells as a reaction to other trading algorithms triggering short sells?

Well think of that, but with cars, with people in them...

Edit: I think it's being misconstrued that I am somehow against Google Cars, when in fact I'm very much for them. Regardless of the likelihood of the aforementioned scenario, I agree that it's still far better than the wildly unpredictable human factor. And ultimately I think that self-driven cars will be a boon both for road safety as well as fuel economy and overall emissions. (Not to mention traffic, I can't wait for a world when traffic is basically non-existent)

One thing I realized after making this comment too, is that road situations are far easier to predict than the randomness of the market, and the consequences are much higher than 0's in a bank account, so I'm sure there will be fail-safes.


The only thing those two have in common is computer code. It's a poor analogy at best.

When it comes to making accurate split-second decisions, I'll take an algorithm over a person any day of the week.

Also: garbage in, garbage out, and so on.


"When it comes to making accurate split-second decisions, I'll take an algorithm over a person any day of the week."

You're assuming too much. Just because it's an algorithm doesn't mean it'll know what to do in every situation. It'll take a lot of work to develop a set of algorithms that can handle all the events that confront a driver on a regular basis.

Don't get me wrong, I would be comfortable being driven by a computer but not just yet. Before I say "I'll take an algorithm over a person any day of the week." I want to make sure that algorithm is tested and works well.


By the flash crash, you mean that market event where something went wrong for 20 minutes (due to a human error), but the system recovered all by itself before the end of the day (thanks primarily to electronic systems)?

http://www.chrisstucchio.com/blog/2012/flash_crash_flash_in_...

That event which was exciting if you were an HFT, but which you can't actually find in a daily stock chart?

I'm hoping that self driving cars will be as robust as our electronic trading systems.


That is an awful comparison. A bunch of trading algorithms designed to compete and get an edge on each other. Google's cars aren't programmed to race everything in their path, they're designed to be as conservative as possible.


That wouldn't happen, because Google's cars aren't networked together. You are probably imagining that each Google self-driving car sends its driving data to the cars around it and that is how they avoid accidents. That's not how it works, though (although it would probably be useful in the future). The way it works is that the car has a giant lidar attached to the top and it maps out a real-time 3D map of its surroundings, which is uploaded to Google's servers to update the world map in real time (see the toy sketch below).

edit: Also to add, the reason that flash crashes happen is because the algorithms are unaware of what the others are doing (although they can try to guess). Also, self-driving cars are designed to follow laws, not beat the competition.
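
To make the mapping idea concrete, here is a toy 2-D occupancy-grid update in Python. It's a drastic simplification of the car's actual real-time 3-D mapping: the grid size, sensor-model increments, and range readings are all made-up assumptions:

    # Toy 2-D occupancy grid -- fuse range readings into per-cell occupancy log-odds.
    import math

    GRID = 20
    log_odds = [[0.0] * GRID for _ in range(GRID)]   # 0.0 = unknown (p = 0.5)
    L_OCC, L_FREE = 0.9, -0.4                        # assumed sensor-model increments

    def integrate_ray(x, y, dx, dy, hit_range):
        """Robot at cell (x, y) senses along (dx, dy); obstacle at hit_range cells."""
        for r in range(1, hit_range + 1):
            cx, cy = x + dx * r, y + dy * r
            if not (0 <= cx < GRID and 0 <= cy < GRID):
                return
            log_odds[cy][cx] += L_OCC if r == hit_range else L_FREE

    def prob(l):
        """Convert log-odds back to a probability of occupancy."""
        return 1.0 - 1.0 / (1.0 + math.exp(l))

    # Robot at (10, 10) gets four range readings (in cells), one per direction.
    for (dx, dy), rng in [((1, 0), 5), ((-1, 0), 3), ((0, 1), 7), ((0, -1), 2)]:
        integrate_ray(10, 10, dx, dy, rng)

    print(prob(log_odds[10][15]))  # ~0.71: likely occupied (hit at range 5 to the east)
    print(prob(log_odds[10][12]))  # ~0.40: likely free (the ray passed through it)

Real systems do this same kind of fusion over huge numbers of 3-D points, and also have to estimate the car's own pose, which is where the filters mentioned at the top of the thread come in.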


Well, I think I have it right: while the trading algorithms are networked, they essentially operate independently of one another. So just as a trading algorithm reacts to market movements, the lidar on top of the car would react to road conditions and other drivers.

I couldn't even imagine them all networked, although it may actually be better, it seems like even more danger.


That actually isn't a problem for the lidar, because it has 360-degree awareness. So if a car veers into your lane and you are wedged between two cars, instead of veering to the right the self-driving car would slow down. So basically your argument is actually an argument FOR self-driving cars. I imagine that if a self-driving car is put in a situation where it cannot avoid an accident at all, it could even maneuver the car to minimize the amount of damage that is caused.


> ...basically your argument is actually an argument FOR self driving cars.

Hah, yeah it kind of ended up being that way didn't it? :) Even though I don't feel I was arguing against them in the first place.


Assuming the cars are designed to practice basic defensive driving (especially keeping a safe distance), I don't really see a realistic scenario where this could happen.

Far more likely is a human driver screwing things up.


One issue with keeping a safe distance is that doing so means you travel slower than nearly everyone around you on the highway, since the space you leave in front of you is viewed as an invitation to merge ahead of you, meaning you have to slow a bit to open to a safe distance again, which prompts someone else to jump into the "empty" space...

However, I suppose a safe distance for an autocar should be considerably smaller than for a human-driven car, since the start of braking would be essentially instantaneous.
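
A rough illustration of how much of the safe gap is just reaction time (all numbers below are assumptions):

    # How much following distance does reaction time cost? Illustrative numbers only.
    speed_mph = 65
    speed_fps = speed_mph * 5280 / 3600.0        # ~95 ft/s

    human_reaction_s = 1.5                       # assumed typical perception-reaction time
    computer_reaction_s = 0.1                    # assumed sensor-to-brake latency

    braking_decel_fps2 = 0.7 * 32.2              # ~0.7 g on dry pavement (assumed)
    braking_distance = speed_fps ** 2 / (2 * braking_decel_fps2)   # ~200 ft, same for both

    # Distance needed to stop if the car ahead stopped dead:
    human_gap = human_reaction_s * speed_fps + braking_distance
    computer_gap = computer_reaction_s * speed_fps + braking_distance

    print(round(human_gap), round(computer_gap))   # ~345 ft vs. ~211 ft

If the car ahead brakes rather than stopping dead, both cars cover about the same braking distance and only the reaction-time term matters, which is where the computer's advantage mostly lies.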


As the flash crash showed, if you screw up enough in the market, they'll just hit the reset button and undo the trades.

The same isn't true for cars, and that'll likely affect the code testing process.

Plus, if you're going to reference the flash crash, I'll point out that people, not computers, caused the Great Depression.


That's a good point. For every Flash Crash we've seen, how many human-error market crashes have there been? For every algorithmic auto crash that could ever happen, how many human-error auto crashes have there already been? Just 24 hours ago I had to go pick up my girlfriend from the side of the road after someone swerved into her car after not looking in his mirrors. The complaint that algorithms might cause an unintended car crash so we should continue to only have people behind the wheel rings a bit hollow when people-driven cars is one of the biggest killers in Western society.


...people, not computers, caused the Great Depression.

Not to mention the flash crash...

http://www.sec.gov/news/studies/2010/marketevents-report.pdf


Actually, I'd say that the way people currently drive, at least from the mid-atlantic to the north east, is more likely to cause a "flash" crash than a non-emotional computer algorithm. (I'm looking at you, CT drivers, and the 95/287/turnpike/GSP area)


So you'll have a crash once a year. Big deal. At least a thousand other crashes prone to human error would be prevented.


I'm going to disagree with everyone explaining how a flash crash isn't a valid analogy here. It actually is, although the word "crash" in this case is semantically overloaded and it does not mean that a true automotive crash is the likely result.

Both the market and the road have a number of autonomous agents interacting with each other under some rule set. The nature of the players has an enormous impact on how the game is played. A set of humans is physically incapable of "flash crashing" a stock market because they are literally physically incapable of trading fast enough for that to happen. The introduction of other effectively-autonomous agents into the market changes the nature of what the collection, considered as a whole, can and does do.

It is true that introducing computer-controlled cars onto the road in quantity will almost certainly qualitatively change the nature of driving on the road, and it is valid to be curious or even concerned about what this effect may be. It would be particularly bad to imagine that all the cars are running the exact same code; it is a completely valid concern that one particular bug could be triggered that causes a mass failure of some type, including true automotive crashes, but possibly also just software crashes. If you dig into real automotive code, you'll find that similar things have already happened.

Long term, I think it is likely to be a net positive effect. You'll have a lot more drivers on the road taking what will probably be a very conservative approach to driving, with much more careful management and maintenance of margin for error, including in situations where humans tend to play fast and loose without even realizing it. It seems likely to me that computer cars will eventually refuse to drive in certain bad conditions, like icy roads, and that over time we will consider that to be an acceptable reason to not drive. But I do think we also want to be careful to ensure that there are many implementations of self-driving cars; monoculture has too significant a chance of a Black Swan event. But it isn't impossible we'll pass through a period where the net result is a bit more dubious.

There are a lot of corner cases to work out, and that includes things that today we wouldn't even consider. Suppose 99% of the cars on the road are computer controlled. What do they do when a teenager hops on an overpass and starts throwing paint balloons? What happens when teenagers start jumping into highways on a dare? As the system becomes a computer system rather than a human system, we must also consider how it will be attacked not only by the real world, but by humans as well. Google's really being too optimistic here. They've made enormous strides, truly enormous strides, and now we're seriously talking about them as a thing that may happen for real, rather than the ever-nebulous "someday", and that's big. But they've got a long way to go before we can truly put them in the hands of the public.


A set of humans is physically incapable of "flash crashing" a stock market

False. They did it in 1962.

http://online.wsj.com/article/SB1000142405274870395760457527...


IMHO, they call that a flash crash because by 1962 standards, it was. But I'd say there was some significant qualitative differences. The Wikipedia article on the 2010 crash [1], for instance, talks about things like "At 2:45:28 pm, trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange ('CME') Stop Logic Functionality was triggered in order to prevent a cascade of further price declines." Emphasis mine. Markets have always been able to crash quickly, but without computers you're not going to get phrases like "paused for five seconds" to mean anything.

[1]: http://en.wikipedia.org/wiki/2010_Flash_Crash


This technology has the potential to save tens of thousands of lives each year. How can anyone possibly argue that the technology shouldn't be on the roads until it's 100% safe (which will probably never be possible)?

So, it's okay for tens of thousands of human drivers to be killed while driving, but inconceivable for tens of people to die at the hands of a computer?


> So, it's okay for tens of thousands of human drivers to be killed while driving, but inconceivable for tens of people to die at the hands of a computer?

I think you summed it up nicely. That's the way the general public will react.


Imagine that some crazy schizophrenic-paranoid-whatnot-wacko manages to create a botnet made of 100,000 autonomous cars and decides to make them accelerate simultaneously and crash into each other...

Note that we're living in a world where hardly a day passes without some major security exploit being discovered, and hardly a day passes without hearing about some foreign government taking control of domestic computers. What if Iran manages to gain control of all the autonomous vehicles? Isn't that worth a thousand atomic bombs?

And, yes, there's a psychological problem when there's no human to blame but just a computer or some software (and, actually, ultimately the entire chain, up to the politicians who allowed such cars to be used on the road and the programmers who wrote the software). It's alienating to society as a whole, and it's a real issue. People are trying to find solutions to deal with this problem.


That assumes that the cars have some mechanism to wirelessly communicate over very large distances. Are you sure these cars are being designed with mesh-style communications?


Release early, release often? Wonder if that'll work for driving software. Chrome Version 24.0.1312.57 here.


I don't see how it could be any worse than some people's driving already. Compare the failure rate of embedded devices, software and servo hardware and that of a tired human being coming home after a long day's work and I'd take the "smart car" over the exhausted human any day.


"We expect to release the technology in the next five years" It's extremely unlikely that it will be the entire car, more likely backup/safety features, at most the self parking like we say at ces at least until there is enough data to support their expansion.


Ah, HN's double standards...

I mean: it's amazing to see all these people saying, on one side, "It's normal that there are hundreds of millions of zombie machines in gigantic botnets; we'll never have 100% secure software" and, on the other side, these very same people saying: "Can't wait to have an autonomous car in 3 years / 15 years".

So what is it? Insecure software or safe cars?

So are there going to be theorem provers used to prove that the OSes in these cars are bulletproof? And the compilers, of course, shall have been proved secure too?

If not, it means there are software security issues, right? Someone commanding one million zombie PCs is not cool. Someone commanding one million zombie autonomous cars is downright scary.


I don't think that there is any double standard here. I think most of us who have driven in most countries have noticed that there are some truly incompetent human drivers on the road who raise risk for everyone. It's enough for self-driving cars to reduce the risk of traveling from place to place. If I must travel, why not let the machine do the driving, while I think about my work with full attention or engage in undistracted conversation with fellow passengers? You are of course entitled to travel only by means like commercial airlines or railroad lines that will not be self-driving as soon as some cars will. I think the regulators of highway traffic in each country will make reasonable trade-offs about the timing and conditions under which self-driving cars will be allowed on the roads I travel. I look forward to that day, having seen how local human drivers drive.


I hope my car doesn't auto-run email attachments or let me browse and download from obviously dodgy websites.


You mean like domestic broadband routers?

How many of these are part of botnets? A huge lot.

Software ain't secure yet. I'm not saying it can't be. But we're simply not there yet.


This sounds like the plot for an excellent sci fi horror flick.


On second thought I'm thinking about something: everybody here seems so convinced that AI is so advanced and computer software so secure that the benefits here far outweigh the drawbacks (including the loss of more and more individual liberty for the benefit of giga-corporations)...

So what if there are actually zero deaths? The car is so good, with a feedback loop so fast, that you simply can't have accidents anymore...

Will the speed limits be then raised to 100mph in the street and 200mph or more on the highway?


> the loss of more and more individual liberty

wait, what?

> giga-corporations

... wat. Google is at most a kilo-corp.


There have already been software failures in cars that created serious problems, like drive-by-wire cars that wouldn't stop accelerating.

As of now the problem is mitigated by having someone behind the wheel. Even when the motor shuts down and you lose assistance, you can still turn the steering wheel, brake, and/or use the handbrake. This has saved many lives.

But how often do OSes and other software have bugs? Security issues? I can't imagine for a second that these cars are going to be fully autonomous and not connected to a network. How can we possibly make that network safe?

Lately on HN the consensus seemed to be: "All software has bugs; any software can be exploited if the attacker spends enough time."

So is the software in these cars going to be safe? Safe from ill-intentioned exploits, safe from bugs potentially threatening lives?

Now if there are fewer deaths than what we typically have in a year, OK. You'll have to check the numbers.

But there are going to be deaths. That is just a fact.

And the public reaction when there's a death due to a "computer driving" is going to be terrible.

I mean: it's amazing to see all these people saying, on one side, "It's normal that there are hundreds of millions of zombie machines in gigantic botnets; we'll never have 100% secure software" and, on the other side, these very same people saying: "Can't wait to have an autonomous car in 3 years / 15 years".

So what is it? Insecure software or safe cars?


The real question is why the machines discriminate against old people:

http://www.theatlantic.com/business/archive/2010/03/how-real...

According to NHTSA, unintended acceleration was not caused by electronic errors. It was caused by "pedal misapplication" and mechanical failures (the gas pedal gets stuck under the floor mat).

https://en.wikipedia.org/wiki/2009%962011_Toyota_vehicle_rec...


Insecure software or safe cars?

Your implication being that there is such a thing as a safe car. I think that people are excited for self-driving cars on the basis that they will kill fewer people on average than human drivers do.

It's a slightly depressing way to think about things, but it's realistic. However, the media hysteria that will surround the first self-driving car accident/killing will be critical.


I'm curious where, or with whom, the buck stops. If my driverless car causes an accident, who is to blame? It couldn't have been me, as I gave it full control and was asleep in the back. Is it the developer of the software or the builder of the car? Maybe a sensor failed, and thus the finger is pointed at me, but I point it at the dealer who serviced the car. Given that the people who would most benefit from this tech are the same people who are least likely to give a shit about the upkeep of their car, we are going to have a few years of interesting news headlines.


Can it pass the same driving test as every person passes? If so, why would it be considered less capable than a human who passes the same test?

I mean, realistically that is the only barrier between a human being and driving. Jay walkers get hit by humans. Cars running red lights get hit by humans. Sometimes they don't even brake because they were looking in their mirrors or fiddling with the radio.

The risk of your car being hacked is the one area that I find really scary. It can already happen to an extent - but this could be a lot lot worse.


You're assuming that people driving cars is safe. The question is which is less dangerous - an autonomous car or a normal car.


Humans fail too. If the software is not perfect but still safer than a person, you still come out ahead.


“We expect to release the technology in the next five years. In what form it gets released is still to be determined.”

You're talking, what, another 2-5 years of testing and adaptation from car manufacturers, if they choose to go with Google? Almost all of them have their own projects and, IMO, are unlikely to be tied to Google and their likely demands.



