Self-driving Volvos cover 200km of busy Spanish motorway (reghardware.com)
127 points by tomgallard on May 28, 2012 | 55 comments



Self-driving? Only if that term can be applied to a laser-beacon and wireless follow-the-leader link. What happens if your radio link encounters interference? What happens if another car cuts in front of a robot car?

Obviously, all these self-driven car stories are still ages behind Google's AI car tech. I suppose it's unfair that Google's car has spoiled the relative excitement of all these other stories; they seem extraordinarily boring in comparison.


Genuinely curious: what is it that makes so many people have such high regard for Google's self-driving AI? I agree that it's a whole different ballgame compared to just tracking a lead vehicle on the motorway, but honestly, all I have seen from the Google self-driving cars has basically been hyped-up success stories and demo videos in extremely controlled and scripted environments. Nobody ever shows that video from less than a year ago where a Google AI car crashes into another car on their test track.

Yes, it appears the Google cars can drive around town 'autonomously' to some degree, yes, Google has facts to back up the number of miles travelled 'autonomously', but you never, ever get any facts from Google about how many times the human driver still had to intervene, how many times they had the software or hardware fail, how many situations the cars encountered that they could not handle.

To me, self-driving car achievements are still mostly PR. A car that can really, actually drive autonomously and safely, anywhere, anytime, without any need for the driver to pay any attention at all, without requiring adaptation of all infrastructure: it's still a pipe dream and I think it always will be. Road infrastructure and traffic (especially city traffic with bikes and pedestrians) is simply too irregular and unpredictable for any form of AI known to man to handle reliably.

Personally, I think humanity will find a better way of transportation before the self-driving car will ever become a reality, as in: I can walk into a car dealership, buy a self-driving car, and have it drive me home safely while I read a newspaper, no matter where I live.


I have not heard of a Google AI Car crashing into another car. I have heard that one of the Google AI Cars was involved in a small crash (the other car crashed into the Google Car) while it was driven manually just outside Google's headquarters. Is that the incident you are referring to?


No, I didn't even know about that crash. Reading the reports, it apparently involved four cars, with the AI Prius running into the back of the car in front of it. Obviously, Google declared afterwards that it was driven by a human at the time, and that the crash was due to human error. What else would you expect them to do? There's no way to verify Google's statement, and why would anyone want to anyway? Blaming it on one of the Google employees who are still required to be in the Google vehicles works out much better for every party involved in such an incident, and for Google in particular.

The incident I was referring to was a video of a Google AI car doing laps on a small test track that they set up on a parking lot. The car basically missed a turn and left the track, hit the brakes and skidded into a car parked next to the track. Somehow (coincidentally?) the video is nowhere to be found using Google, but I remember watching it like it was yesterday. You could see two Google employees chatting a bit next to the track until the car crashed in the background and they got pretty upset.

Just to make myself clear: I'm not trying to talk down the great achievements Google has made with their driverless cars, I'm just a little sceptical about how far they actually are once you strip away all the PR coming from Google.


> There's no way to verify Google's statement

This seems absurdly paranoid. Why is it, on the face of things, more likely that the LIDAR-packing robot which is scanning 360 degrees hundreds of times per second got into a fender-bender than that the human engineer who sometimes has to drive that same car did? What reason do you think Google has to lie about this to the public-- considering that if it were the result of a software bug, and that bug made it to production, and somebody died, that little white lie could easily be worth millions of dollars of liability and you better believe that Google PR knows it?

> Google AI car doing laps

And this is even sillier. You use a track precisely because you are not confident that the car will not go off the road, which is very likely to happen while you are building a robotic car. Allowing the car into traffic indicates that they are confident.

This project is trying to alter the future of transportation. The self-driving car could wind up being the most valuable property of Google's. The technological challenges were difficult; the legal and political ones will be Sisyphean. They must make their case to lawyers and legislators solidly in the pocket of the industries they plan to destroy.

Why on Earth do you think the fickle opinion of the tech-press-consuming public matters to them at all?


> This seems absurdly paranoid. Why is it, on the face of things, more likely that the LIDAR-packing robot which is scanning 360 degrees hundreds of times per second got into a fender-bender than that the human engineer who sometimes has to drive that same car did?

For all I know there could be a million other reasons why it crashed, none of them related to the hardware. Maybe it was a combination of a software problem and failure of the human driver to intervene properly. Maybe the sensors picked up something that confused the software into thinking the car ahead was still moving. Maybe there was some kind of hardware failure. Maybe the AI simply never had to do the kind of unexpected emergency stop that caused this incident and the minimum stopping distance wasn't programmed properly for the road conditions at the time of the incident. Or, maybe, it actually was fully operated by a human. Neither of us knows for sure.

> What reason do you think Google has to lie about this to the public-- considering that if it were the result of a software bug, and that bug made it to production, and somebody died, that little white lie could easily be worth millions of dollars of liability and you better believe that Google PR knows it?

You don't really have to ask that question, do you? Google has already spent millions upon millions on this technology, and probably had to pull out all the stops to lobby for the legislation changes they need for their tests. Incidents like this could instantly kill their ambitions and severely hurt the credibility of this technology. There basically is no risk in blaming the driver, who was, in fact, in the car and behind the wheel when it happened. Current legislation actually demands there be a human driver behind the wheel for exactly this purpose: to be able to assign responsibility for what the car does to someone in case of an accident. Legally, the 'driver' likely was accountable for the incident even if he wasn't driving the car at all.

Nobody is served by saying it was the computer: not the owners of the other cars, not law enforcement, not the government that allowed these cars to drive around town, and definitely not Google.

> And this is even sillier. You use a track precisely because you are not confident that the car will not go off the road, which is very likely to happen while you are building a robotic car.

I brought the other incident up just to make the point that you don't hear anything when the Google cars fail, just success stories that don't include any details about failures or incidents. You don't assume the Google cars are perfect yet, right? So if Google is so open and honest about everything, like you seem to assume, why don't they tell us how often the cars require human intervention, or what kind of situations are still a problem for the AI?

> Allowing the car into traffic indicates that they are confident.

You realize that none of these cars actually goes on the road without a human driver ready to step in when it fails, right? And that all the routes the cars drive are carefully selected and likely full of pre-programmed and scripted details?

This is not to say the technology is worthless just because Google is still learning, but it's statements like yours that nicely show what's so strange about this driverless-cars discussion. Just because Google is confident enough to have AI cars with people behind the wheel driving through town, you are confident that you have some insight into how those cars would actually do without a human backup driver behind the wheel. Unless you are a Google employee working on these cars, you know nothing more than what Google wants you to know, and that most likely will not include all the possible points of failure of this technology.


I don't expect Google to be completely open, I just don't expect them to lie for no reason, and so don't see why I should doubt what they've said.

They've been quite reasonably open about the limitations of their technology: It requires mapped-out roads, visible road markings, fair weather, et cetera. It requires a human driver at the wheel because there are some traffic situations it is not able to navigate; an example given was meeting an opposing car in a narrow road where the car was not sure if there was enough room to pass. In these situations, a voice announces politely that the human should resume control. If the human does not, one can only assume the car will come to a complete stop.

However, they have demonstrated the ability to autonomously navigate most types of traffic, including reacting to unexpected obstacles or pedestrians, dealing with panic stops ahead, negotiating with other drivers at a four-way stop, et cetera.

They claim hundreds of thousands of miles of fully-autonomous driving, with occasional human intervention being necessary in atypical circumstances. That seems like a completely plausible claim considering what they've shown us, and lacking any way in which they could profit from lying about it, I don't see any reason to take that claim at anything but face value.


The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.

We work hard on the systems we build, and a simple lie like that could absolutely seem to be in the interest of the project at a time when this technology still freaks out lawmakers.

Should we believe Google? Sure, probably.

Is it absurd to question their veracity? Are you kidding? Are you familiar with American capitalism?


> The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.

Yeah, I was thinking the same thing. If there's any chance that it's a lie, that's the only way it happens-- the guy at the wheel decided to take the fall without telling anyone else. Maybe he pushed the button by accident, the thing freaked out, and he decided no one needed to know. Possible.

But that still doesn't fly. Everything that happens to that car is measured and recorded for later analysis. There is a verifiable record of when it is under human control and when it isn't. Faking that record convincingly enough to cover up the only public accident in the history of the project is almost certainly beyond the capabilities of a single engineer who was just in a fender bender. (And for what it's worth, Google has claimed to have logs which prove that the car was in manual mode, which I assume are available to legislators.)

Human beings crash cars all the time. Autonomous vehicles crash-- well, there's actually no evidence that Google's self-driving car has ever crashed[0]. So if one of their self-driving cars crashes with a human behind the wheel, outside of the context of an autonomous test-drive, and he says at the scene that he was driving, and Google confirms that they have proof that he was driving, and considering that lying to the public about something provable is a really, really bad idea when you're trying to get a law passed...

If there were any evidence, any shred of inconsistency to their story, I'd be skeptical. But there isn't. There's just no reason to doubt them besides "Companies always lie." Or "Of course they'd say that." Yeah, I'd say that's absurd.

[0] In traffic, obviously. One can only presume it has hit many obstacles during development.


So, because in alpha testing they had two issues that were later corrected, you're suggesting the whole thing is impossible? Will self-driving cars be perfect? Of course not. However, the threshold is simply whether the first generation of self-driving cars will be hit by people more often than they hit people, and I think that's very attainable: 7-12 years from now they will be expensive, but on public highways.

PS: #1 reason they will be adopted after the price drops? Ability for them to drive you home after a night of drinking.


I'm not suggesting anything, I'm just saying that extrapolating publicly available information about the performance of Google AI cars does not give me any clue about how they would do in real traffic.

I don't think 'just as bad as some human drivers' will be good enough for driverless cars to ever become reality, by the way. I'm amazed how many people adhere to this strange way of reasoning: because some people are bad drivers and have accidents, it's OK for AI cars to do better than the average driver, but still worse than the responsible, safe driver? I'd say we'd better spend the time and energy trying to reduce the number of bad drivers, or limit the damage they can do when they screw up (for example with AI assistance as opposed to AI drivers). It's exactly arguments like this that put me off in the driverless-cars debate, just like the eternal 'but we have had driverless planes for years and they work' argument I see a lot of people make. It only goes to show how love for technology clouds some people's judgement (I hope I don't have to explain why comparing AI cars to planes doesn't make sense on about any level imaginable?)

Anyway, judging from the votes I get, it was a bad idea to post my thoughts about driverless cars here, as I already feared. Just like the discussions I sometimes have with my colleagues, who all also love technology, it goes nowhere. It's almost as if driverless cars have become a religion for some; any time I try to put things into perspective I only get back a lot of negativity and non-sequitur arguments. I guess some people really want to believe in driverless cars.


The disconnect is that even if I still drove myself 95% of the time, I would still find it very useful to be able to click a 'drive me' button when I know I am not at my peak. So I would pay money for the feature. Therefore it's just a financial and legal question whether I am allowed to buy one, and because they would reduce accidents on net, I think they will become widely available.

Now, I probably would not buy (or be able to afford) gen #1, but it would only take a few years of adoption before I would consider it safe enough to use. And I know there are plenty of people willing to test the first 50 billion miles to 'work out the kinks'.

PS: Over 5 years there are millions of people who would literally gain more than $5k in direct utility from the 'I am drunk, drive me home' button, even if that's the only thing it did. Less common but far more important is the 'it's 2am, I am tired, just get me home' button. Both of which pale in comparison to the 'I am still sleepy, drive me to work' button.


Don't worry - you are not the only one.

I don't see computers as being good enough at general driving to make it work. And, perhaps cynically, I don't see society in general liking the idea that accidents can be caused with nobody to punish. So, I imagine that even if the computer can drive it 100% perfectly 100% of the time, you'll need to have a human watching out. Which means you need a human driver. Which means it's going to be like driving, basically, only even MORE boring. Will you be able to catch up on your reading? No. Will you be legally conveyed in your late-night state of merry intoxication? No. Can you have a quick snooze? No. So... um, what's the point?

Unless a driverless car is as good as taking the train, or going in an aeroplane, or - yes - simply being a passenger in a car that's being driven by the traditional human, frankly you might as well not bother.

Of course, it doesn't matter what I believe. There are lots of people working on this problem whose IQs and imaginations clearly far exceed my own, and I don't mind admitting that I am already surprised by just how much progress has been made. So who knows?

Nevertheless, I think I will be proven right.

(On the plus side, even once the push for autonomous cars fails, we'll have a mind-boggling set of amazing driving assists.)


Totally agree with you and the original poster that the Google car is probably not driving as well as many people think.

If it were anywhere near that, Google would already be selling the tech in some form, because it would probably make them the most valuable company in the world.

I can't imagine how a self-driving car would, for instance, know how to drive into my garage (hint: it's not at all straight ahead & flat). How does the car know where it is allowed to park in a private car park? There are lots of huge challenges here.

A friend of mine who knows automotive R&D much better than I do also confirmed that view. We won't see them for years and years.

What I could well imagine soon, though, are for instance specially prepared highways that would allow driverless operation on given sections, probably increasing the overall throughput and cutting CO2 emissions, etc.


The reason we won't have self-driving cars anytime soon is that car product development cycles are very long. Even if car makers were working with Google now to put this into cars, it could be five years before you'd see anything in dealerships.

And before that they probably need to become street-legal in major countries, convince manufacturers to trust Google, etc.


> I don't see computers as being good enough at general driving to make it work.

Why not? It's a fairly mechanical operation, and 360 degree range-finders can do a far better job of detecting obstacles than rather limited human vision.


Driving itself is straightforward, but the inputs are messy and noisy. Computers aren't good at that.

Consider the wide variety of different road surfaces and cambers, the ever-changing appearances of obstacles according to conditions and the seasons, the limited accuracy of road maps, and the constant changing of the road network in minor ways. I expect a lot of driverless cars to be flummoxed by potholes, confused by temporary roadworks, utterly bamboozled by temporary diversions - and they won't be able to find my road in the first place. (I don't live in the middle of nowhere.)

As a simple matter - how will the car reliably know how fast to go? You can't rely on map data, as the legal limit can change, and nobody will think to tell the map people. You can't rely on the car spotting speed limit signs, as people can (and do) graffiti over them, or twist them so they're not straight any more. I don't think people will be so keen on "driverless" cars after they're held up by a whole train of them going 30mph on a 60mph limit road, or after they're in an accident with one going 60mph in a 20mph residential area.

Perhaps I'm being overly cautious, but I just don't think this will work very well. I can think of two outcomes. The first will never happen, because it involves simply letting the computers kill and maim and cause accidents, under the assumption that the overall accident rate will be lower. But then who will be to blame for each accident? People need somebody to blame, so they can be taken to court and maybe sent to prison.

The second option is that you require a human to be in attendance all the time, ready to take over the controls when the computer gets confused. Which means it's not driverless. Which makes the whole exercise a totally pointless waste of money. If you need a driver... well, it's not driverless. You might as well class it as an amazing high-tech set of astounding driving aids. That is probably what we'll end up with, I suspect.


I expect that self-driving will become a safety mechanism before it becomes a full driver replacement. It will probably keep track of cars around you as you're driving manually, and if you get close to a collision, it will take over and move you away.
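
A minimal sketch of that kind of guard - the time-to-collision threshold and the interface are invented, real systems are far more elaborate:

    # Take over from the human if the time-to-collision to the car ahead
    # drops below a threshold. The 1.5 s figure is an illustrative guess.
    def should_intervene(gap_m, closing_speed_ms, ttc_threshold_s=1.5):
        """Return True if the car should seize control and brake/steer."""
        if closing_speed_ms <= 0:          # not closing on the car ahead
            return False
        return gap_m / closing_speed_ms < ttc_threshold_s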

As the data improves, the road information becomes better curated, and so on, I expect that it will become a driving aid, as you describe.

And I expect that within a couple of decades, fully driverless cars (with nobody behind the wheel) will become commonplace.


Perhaps an unpopular viewpoint ;)

Don't listen to me, if you disagree - we will discover the truth of the matter in the end, whether it will end up as I suggest or not.


Human drivers are terrible. Is it so hard to imagine computers doing significantly better? Why do computers need to be 100% safe when the humans they are replacing are nowhere near that?


I would really like to see this video because if it is true, then they are covering up their mistakes just to increase their credibility.


I read in an article recently that Kinze is planning on releasing a fully autonomous agriculture planting machine by the end of this year. The field is certainly not the highway, but it is still neat to see the technology already starting to become available for purchase (assuming they are able to deliver on their estimates).


Modern agricultural planting and harvesting equipment is already GPS-guided; this is more of an incremental step to full automation of something that was probably 80--90% there already.


Which is, realistically, the stage Google is at now. The car seems to be able to drive just fine until an outlier condition arises.

What makes the Kinze system interesting is that they must feel confident enough about the safety/obstacle-avoidance features to ready it for mass consumption. Again, the complexity of monitoring a field is not at the level of the road, but it does show that a lot of progress is happening in this area.

It does show promise for Google to be able to pull it off on the road too.


Suppose there was a super HOV lane on your freeway where you were allowed to do 100mph instead of 55mph and the distance between cars was 20ft instead of 300ft.

Would you buy one then?

Would your local highway authority jump at the chance of getting 4x as many cars without building any more infrastructure?
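
Back-of-the-envelope, assuming ~15ft of vehicle length (my number, purely illustrative):

    # Quick lane-capacity arithmetic for the premise above.
    def flow_per_hour(speed_mph, gap_ft, car_ft=15):
        per_mile = 5280 / (gap_ft + car_ft)   # vehicles in one mile of lane
        return per_mile * speed_mph           # vehicles past a point per hour

    print(round(flow_per_hour(55, 300)))   # ~920/hour: today's lane
    print(round(flow_per_hour(100, 20)))   # ~15,000/hour: the robot lane

On those assumptions the 4x figure actually looks conservative.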


No, the local highway authority would not jump at the chance. They have little incentive to make roads more efficient.

There is incentive to make them safer. No one wants to be blamed for 25-car pileups. I think convincing officials that this could be done safely will be difficult.


I don't know where you're from, but around here highway dollars are spent expanding and rebuilding highways to increase capacity and throughput. Projects that focus only on safety -- removing left-lane exits and entrances, adding full-width paved shoulders, increasing grade, increasing curve radius, etc -- are rare-to-nonexistent uses of scarce highway dollars.

States -- especially California -- have a serious interest in increased highway efficiency. The only thing about the original premise that I wonder about is how much additional wear-and-tear the high-speed lane would create, especially considering it would need to be kept in optimal condition to avoid deleterious safety effects at such high speeds.


Not much - wear and tear is strongly a function of vehicle weight; a lane with no big rigs doesn't suffer much.

And with enforced control over the vehicles you could make the lane narrower and even have a low kerb to stop people cutting in and out.

Popular in Europe are guided busways: you put a low concrete kerb in a very narrow lane, i.e. 12 inches wider than the bus. There are small guide wheels that run against the kerb and servo the steering. So you can run a two-way full-size bus route in almost the same width as a single rail line - and at the end of the route the bus can pull out into normal traffic. This is just a software alternative.


They do have an incentive to avoid 4-hour queues of people getting to work - at least when those queues mean companies start moving to other cities because of the traffic.

That's why cities are forever building new freeways, widening freeways, building bridges etc.

The cities don't get blamed for pileups in the same way that airports don't get blamed for plane crashes.


What is the incentive for highway planners? Isn't how many companies are in the area a little indirect as an incentive? How does it impact their individual wellbeing?

An indirect political incentive is a very weak incentive. There is more direct incentive to do nothing innovative.


I imagine the Volvo engineers have done a lot of work on radio-link interference, cars cutting in, etc. - just because they don't cover every eventuality in the article doesn't mean they haven't thought of it!


The concept isn't new: planes have been landed automatically using the radio beams of the ILS system. But compared to a landing procedure on a single runway in controlled airspace, with the necessary ground equipment to aid in the process, car traffic is far less static.

The automatic following system will bump into fundamental limits, and the close proximity of the participating cars will only magnify their effect.

Consider a ten car "train" (or convoy) following the first car automatically. Their speed is higher than usual and they drive within a few meters of each other, all carefully controlled by a computer. Then the first car hits a big elk (moose).

The weight of the animal itself is sufficient to cause a significant, sudden decrease in the speed of the car body. While the sensors on the second car do notice that the first car slowed down (due to the impact), it cannot possibly brake to a stop without hitting the first car. If the convoy is driving 100 km/h, no car is going to stop in time if the car ahead loses a significant percentage of its speed in an instant.

Now, what happened between the first and the second car will continue to propagate far back in the convoy and the end result is a pile of ten cars mostly crushed into each other.

Maybe there are no elk on a highway. Make it someone who learned his driving skills on a Russian highway (you must have seen the YouTube videos). Or someone's tire blows out and that car spins into the adjacent lane straight in front of the first car.

But there's a good reason to have a speed-dependent 2-4 second safety margin between cars, and robotic controls and computerised radio links aren't going to change the physics.
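
To put rough numbers on that margin - a quick sketch, where the 0.5 s link-plus-actuation latency and the 8 m/s^2 dry-road deceleration are my own illustrative guesses:

    # Rough stopping-distance arithmetic for the convoy scenario above.
    # Latency and deceleration figures are guesses, not from the article.
    def stopping_distance(v_ms, latency_s=0.5, decel=8.0):
        """Metres covered from speed v_ms (m/s) to a full stop."""
        return v_ms * latency_s + v_ms ** 2 / (2 * decel)

    v = 100 / 3.6                        # 100 km/h in m/s
    print(round(stopping_distance(v)))   # ~62 m to stop from full speed
    print(round(2 * v), round(4 * v))    # the 2-4 s margin: ~56-111 m

Against that, a 6 m gap only works as long as the car ahead obeys the same braking limits you do; it buys nothing against a near-instant speed loss like the elk impact.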


You don't even have to hit a sudden obstacle to get problems; trains of vehicles also add the problem of increased harmonics. Say the second car oscillates +/-10cm around the desired distance from the first car. How will this propagate to the last car if they all have the same controller, or if they all have different controllers? I know the uni here has been doing research on this exact problem in cooperation with Volvo Cars.
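
A toy version of that question, with made-up gains, a made-up 0.1 Hz weave and a made-up timestep, just to show the qualitative effect when every car runs the same naive PD controller on its gap:

    # Toy string-stability simulation: ten followers behind a weaving
    # leader, each running the same PD controller. All constants invented.
    import math

    N, DT, STEPS = 10, 0.05, 4000           # ten followers, 200 s of driving
    KP, KD = 2.0, 1.0                       # spacing gain, relative-speed gain
    W = 2 * math.pi * 0.1                   # leader weaves at 0.1 Hz
    pos = [-6.0 * i for i in range(N + 1)]  # car 0 is the leader, 6 m gaps
    vel = [25.0] * (N + 1)
    peak = [0.0] * (N + 1)

    for step in range(STEPS):
        t = step * DT
        pos[0] = 25.0 * t + 0.10 * math.sin(W * t)   # +/- 10 cm weave
        vel[0] = 25.0 + 0.10 * W * math.cos(W * t)
        for i in range(1, N + 1):
            gap_err = (pos[i - 1] - pos[i]) - 6.0
            acc = KP * gap_err + KD * (vel[i - 1] - vel[i])
            vel[i] += acc * DT
            pos[i] += vel[i] * DT
            peak[i] = max(peak[i], abs(gap_err))

    print([round(p, 2) for p in peak[1:]])  # peak gap error per car

With these (invented) gains the peak error grows by roughly 20% per car down the train - the 'string stability' problem, and as I understand it one reason platoon controllers also use the lead vehicle's broadcast state rather than only the car directly ahead.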


I agree with you that this may not be revolutionary, but with some relatively realistic numbers (1300 kg lead car, a giant 1500 kg moose, 80kph road speed, 6m following distance, 7m/s^2 max decel of following vehicles), it looks like you'd only have the first two or three cars collide, and probably not hard enough to cause life threatening injuries in the second car. I'd think that tuning the following distance to the road speed (and, possibly more morbidly, the mass of the lead car relative to the foreseeable obstacles, which I'm not entirely sure that Volvo wasn't tacitly doing in the test) would be adequate to come up with a safe following distance well shorter than 2 seconds, where fuel economy gains obviously kick in as well.
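
For what it's worth, I ran a toy version of that calculation - all assumptions mine: perfectly inelastic collisions, zero link latency, and every car braking at the stated 7 m/s^2 from the instant of the moose strike:

    # Back-of-the-envelope pile-up simulation for the numbers above.
    M_CAR, M_MOOSE = 1300.0, 1500.0
    V0, GAP, DECEL = 80 / 3.6, 6.0, 7.0

    def inelastic(m1, v1, m2, v2):
        """Common velocity after a perfectly inelastic collision."""
        return (m1 * v1 + m2 * v2) / (m1 + m2)

    def braking_contact(v_front, v_back, gap, a=DECEL, dt=0.001):
        """Both bodies brake at a; return their speeds at the moment the
        rear car touches the front body, or None if it stops in time."""
        x = gap
        while v_back > 0.0:
            v_front = max(0.0, v_front - a * dt)
            v_back = max(0.0, v_back - a * dt)
            x += (v_front - v_back) * dt
            if x <= 0.0:
                return v_front, v_back
        return None

    m_front = M_CAR + M_MOOSE                      # lead car + moose
    v_front = inelastic(M_CAR, V0, M_MOOSE, 0.0)   # ~10.3 m/s after impact
    v_follow = V0                                  # intact followers' speed
    for car in range(2, 11):
        hit = braking_contact(v_front, v_follow, GAP)
        if hit is None:
            print("car %d stops with room to spare" % car)
            break
        v_front, v_follow = hit
        print("car %d hits at ~%.0f km/h closing" % (car, (v_follow - v_front) * 3.6))
        v_front = inelastic(m_front, v_front, M_CAR, v_follow)
        m_front += M_CAR

On those crude assumptions the second, third and fourth cars hit the stack at roughly 43, 29 and 22 km/h closing speed and the fifth stops in time - slightly worse than your estimate, but in the same ballpark.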

The practical reaction time for mid-cruise humans is pretty terrifyingly slow sometimes. Pair that with suboptimal panic behavior and I believe that there's some significant efficiency and maybe even safety gains to be made. Granted, there is a limit.


I heard about similar tests with two cars, one following the other, driving around different countries. They had issues in Russia, as someone would occasionally slip in between them, breaking the link.

Google's research is much more interesting since the car is independent, but it is much more demanding of road data.


The simple way to solve that is the same way human drivers are taught to solve this: by following at a safe distance. I was taught to follow at my own stopping distance, so that if the leading car were to hit a brick wall and stop immediately, I would still have time to stop myself.


Guess why the first car is a truck. Also, Volvo is based in Sweden and has plenty of experience with elk. There are even two tests designed for elk[1].

[1] http://en.wikipedia.org/wiki/Moose_test


This is a great notion for long distance road trips. It would be pretty nice when cruising cross-country to drift in behind an 18-wheeler and then read a book.

I think we'll get fully autonomous vehicles before this type of thing becomes common, but it's pleasant to envision a future where cars on the freeway form little spontaneous communities, with lead cars broadcasting "hey I'm headed here, hitch a ride" and others falling in behind.

Yes, I know there are a million ways a griefer could cause problems. Yes, I know that driving a car on the freeway in the middle of nowhere is one of the easier scenarios for a completely self-driven car, and no lead is necessary. I still think that self-organized caravans are a cool notion.


I was a little surprised at first about: "three cars behind the truck at an average separation of 6m". But this is probably an idea to lessen drag. It doesn't matter what powers our cars (gasoline, biofuel, electricity); less is less, and we need every technology that can achieve it.

This is probably a very important step towards self-driving cars, because some kind of commercial product will bring loads of money to the field, even if it's very basic.


The 6-metre distance between cars is probably to strongly discourage other drivers from attempting to cut in, which would banjax their laser finders and other stuff.

With the lead truck sending instructions, this is a client-server setup, compared to Google's independent peer-to-peer approach. It's not self-driving, it's remote control of a train of cars behind you. The Volvo experiment seems much less ambitious than Google's, and might be more successful in the narrow case of long-distance motorway travel. But I don't see how to generalise it to everyday driving.

Add an ad-hoc mesh network to google's cars and anything could become possible.


Question, since it wasn't answered in the article (I looked): do these cars tether to the truck electronically? I.e. is there a virtual leash/tow going on? Or do they use the truck as a guidepost? Could they have driven without the truck? Did other cars cut in between them at any point?

Fascinating this whole move into self-driving vehicles though.


"Wirelessly streamed data from the lead vehicle tells each car when to accelerate, break and turn, all in real-world traffic conditions."

Seems like a wireless leash.


FTA

> On-board cameras, radar and laser tracking allow each vehicle to monitor the one in front. Wirelessly streamed data from the lead vehicle tells each car when to accelerate, brake and turn, all in real-world traffic conditions.
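
Hypothetically, the follower side of that leash might look something like this - the message fields, gains and the mix of feed-forward with local gap sensing are all my own guesses, not anything from the article:

    # Sketch of a 'wireless leash': the lead vehicle broadcasts its control
    # state, and each follower blends that feed-forward with its own
    # radar/laser gap measurement. Everything here is invented.
    from dataclasses import dataclass

    @dataclass
    class LeadState:
        speed: float   # m/s
        accel: float   # m/s^2, what the lead is about to do
        steer: float   # radians of front-wheel angle

    def follower_command(lead, gap_m, own_speed, target_gap=6.0):
        """Return (accel, steer) for one follower."""
        accel = (lead.accel                         # feed-forward from the link
                 + 0.5 * (gap_m - target_gap)       # correct spacing locally
                 + 0.8 * (lead.speed - own_speed))  # match the lead's speed
        return accel, lead.steer

    # e.g. follower_command(LeadState(speed=25.0, accel=-2.0, steer=0.01), 6.3, 25.4)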


Is it legal to let cars self-drive in Spain?


This: http://en.wikipedia.org/wiki/EUREKA_Prometheus_Project seems to indicate that trials/experiments have been ongoing for many years, in various European countries. Perhaps it's an EU thing, so that Spain is included by default.


When does a car become self-driving? I imagine that distance control is legal (i.e. cruise control where the car will brake and accelerate to maintain distance behind the car in front, up to a maximum speed), and this is a further extension with steering. There's still a person driving the truck at the front.


Though lots of newer cars also have lane assist (so if you're drifting out of your lane without signalling, it will steer you back in).

I think it's becoming clearer that there's no strict dividing line between computer-driven and human-driven.

Rather, we'll slowly move to more and more being done by the computer, and less and less by the human (in this example, the computer taking over the motorway driving, and then handing over to you when it's time to return to normal roads).

This was also covered in the article about the Google car last week, which hands over to humans when it gets into a sticky/narrow situation.


"this is a further extension with steering"

Park assist is fairly common these days - which will steer your car into a parking space.


They have guys in the drivers seat like Google did I suppose. It's legal so long as someone has control.


They probably had drivers at the wheel to get around that.


There's actually a proven and safe method to accomplish the same task, that has been occurring for centuries now. It works by having passengers wait on a platform, step onto a carriage then take a seat while the driver does all the work. Yeah, it's called a railway. :)

You can even take your car onto such a system (motor rail). While it's currently an expensive niche, perhaps prices would come down with greater utilisation. Indeed, if world oil prices should increase enough then it may well be cheaper to load a bunch of vehicles onto a train and tow them along for long distances rather than drive them individually on roads.


It may not be that interesting for cars, but it is for trucks. A friend of mine is working on a similar system for trucks. The point there is that the group of trucks would actually behave as a train (and need only one driver, or at least the others could rest). The trucks could also drive closer to each other this way, saving fuel.

On the 'what happens when the lead car hits an elk' scenario: the lead vehicle could (and probably does) transmit its own parameters (speed, acceleration, etc.).


It's not a self-driving car. Please change the title.


And how did they avoid the reckless Spanish Guardia Civil? They are thirsty for anyone's money, and they won't be stopped by this "self-driving" crap. MULTA!


When the highway patrol cars came past, the Volvos drove underneath the trailers of the big rigs and another rig overtook the first and hid them from view.

Just press the smokey-and-the-bandit button on the self drive console.



