
Self-driving Volvos cover 200km of busy Spanish motorway - tomgallard
http://www.reghardware.com/2012/05/28/volvo_tests_project_sartre_on_public_roads/
======
electrograv
Self-driving? Only if that word can be applied to a laser beacon and a
wireless follow-the-leader link. What happens if the radio link encounters
interference? What happens if another car cuts in front of a robot car?

Obviously, all these self-driving car stories are still ages behind Google's
AI car tech. I suppose it's unfair that Google's car has spoiled the relative
excitement of all these other stories; they seem extraordinarily boring in
comparison.

~~~
w0utert
Genuinely curious: what is it that makes so many people hold Google's
self-driving AI in such high regard? I agree that it's a whole different
ballgame compared to just tracking a lead vehicle on the motorway, but
honestly, all I have seen from the Google self-driving cars has basically
been hyped-up success stories and demo videos in extremely controlled and
scripted environments. Nobody ever shows that video from less than a year ago
where a Google AI car crashed into another car on their test track.

Yes, it appears the Google cars can drive around town 'autonomously' to some
degree; yes, Google has facts to back up the number of miles travelled
'autonomously'. But you never, ever get any facts from Google about how many
times the human driver still had to intervene, how many times the software or
hardware failed, or how many situations the cars encountered that they could
not handle.

To me, self-driving car achievements are still mostly PR. A car that can
really, actually drive autonomously and safely, anywhere, anytime, without
any need for the driver to pay attention at all, and without requiring all
infrastructure to be adapted, is still a pipe dream, and I think it always
will be. Road infrastructure and traffic (especially city traffic with bikes
and pedestrians) are simply too irregular and unpredictable for any form of
AI known to man to handle reliably.

Personally, I think humanity will find a better way of transportation before
the self-driving car ever becomes a reality, as in: I can walk into a car
dealership, buy a self-driving car, and have it drive me home safely while I
read a newspaper, no matter where I live.

~~~
driax
I have not heard of a Google AI Car crashing into another car. I have heard
that one of the Google AI Cars was involved in a small crash (the other car
crashed into the Google Car) while it was driven manually just outside
Google's headquarters. Is that the incident you are referring to?

~~~
w0utert
No, I didn't even know about that crash. Reading the reports, it apparently
involved four cars, with the AI Prius running into the back of the car in
front of it. Obviously, Google declared afterwards that it was driven by a
human at the time, and that the crash was due to human error. What else would
you expect them to do? There's no way to verify Google's statement, and why
would anyone want to anyway? Blaming it on one of the Google employees who
are still required to be in the vehicles works out much better for every
party involved in such an incident, and for Google in particular.

The incident I was referring to was a video of a Google AI car doing laps on
a small test track set up on a parking lot. The car basically missed a turn
and left the track, hit the brakes, and skidded into a car parked next to the
track. Somehow (coincidentally?) the video is nowhere to be found using
Google, but I remember watching it like it was yesterday. You could see two
Google employees chatting next to the track until the car crashed in the
background, at which point they got pretty upset.

Just to make myself clear: I'm not trying to talk down the great achievements
Google has made with their driverless cars. I'm just a little sceptical about
how far they actually are once you strip away all the PR coming from Google.

~~~
Cushman
> There's no way to verify Google's statement

This seems absurdly paranoid. Why is it, on the face of things, more likely
that the LIDAR-packing robot which is scanning 360 degrees hundreds of times
per second got into a fender-bender than that the human engineer who sometimes
has to drive that same car did? What reason do you think Google has to lie
about this to the public-- considering that if it were the result of a
software bug, and that bug made it to production, and somebody died, that
little white lie could easily be worth millions of dollars of liability and
you better believe that Google PR knows it?

> Google AI car doing laps

And this is even sillier. You use a track precisely because you are not
confident that the car will not go off the road, which is very likely to
happen while you are building a robotic car. Allowing the car into traffic
indicates that they _are_ confident.

This project is trying to alter the future of transportation. The
self-driving car could wind up being Google's most valuable property. The
technological challenges were difficult; the legal and political ones will be
Sisyphean. They must make their case to lawyers and legislators solidly in
the pocket of the industries they plan to destroy.

Why on Earth do you think the fickle opinion of the tech-press-consuming
public matters to them _at all_?

~~~
w0utert
> This seems absurdly paranoid. Why is it, on the face of things, more likely
> that the LIDAR-packing robot which is scanning 360 degrees hundreds of times
> per second got into a fender-bender than that the human engineer who
> sometimes has to drive that same car did?

For all I know there could be a million other reasons why it crashed, none of
them related to the hardware. Maybe it was a combination of a software
problem and a failure of the human driver to intervene properly. Maybe the
sensors picked up something that confused the software into thinking the car
ahead was still moving. Maybe there was some kind of hardware failure. Maybe
the AI simply had never had to do the kind of unexpected emergency stop that
caused this incident, and the minimum stopping distance wasn't programmed
properly for the road conditions at the time. Or, maybe, it actually was
fully operated by a human. Neither of us knows for sure.

> What reason do you think Google has to lie about this to the public--
> considering that if it were the result of a software bug, and that bug made
> it to production, and somebody died, that little white lie could easily be
> worth millions of dollars of liability and you better believe that Google PR
> knows it?

You don't really have to ask that question, do you? Google has already spent
millions upon millions on this technology, and probably had to pull out all
the stops to lobby for the legislation changes they need for their tests.
Incidents like this could instantly kill their ambitions and severely hurt
the credibility of the technology. There is basically no risk in blaming the
driver, who was, in fact, in the car and behind the wheel when it happened.
Current legislation actually demands a human driver behind the wheel for
exactly this purpose: to be able to assign responsibility for what the car
does to someone in case of an accident. Legally, the 'driver' likely even
_was_ accountable for the incident, even if he wasn't driving the car at all.

Nobody is served by saying it was the computer: not the owners of the other
cars, not law enforcement, not the government that allowed these cars to drive
around town, and definitely not Google.

> And this is even sillier. You use a track precisely because you are not
> confident that the car will not go off the road, which is very likely to
> happen while you are building a robotic car.

I brought the other incident up just to make the point that you don't hear
anything when the Google cars fail, just success stories that don't include
any details about failures or incidents. You don't assume the Google cars are
perfect yet, right? So if Google is so open and honest about everything, like
you seem to assume, why don't they tell us how often the cars require human
intervention, or what kind of situations are still a problem for the AI?

> Allowing the car into traffic indicates that they are confident.

You realize that none of these cars actually goes on the road without a human
driver ready to step in when it fails, right? And that all the routes the cars
drive are carefully selected and likely full of pre-programmed and scripted
details?

This is not to say the technology is worthless just because Google is still
learning, but statements like yours nicely show what's so strange about this
driverless-car discussion. Just because Google is confident enough to have AI
cars driving through town with people behind the wheel, you are confident
that you have insight into how those cars would actually do without a human
backup driver. Unless you are a Google employee working on these cars, you
know nothing more than what Google wants you to know, and that most likely
does not include all the possible points of failure of this technology.

~~~
Cushman
I don't expect Google to be completely open, I just don't expect them to lie
for no reason, and so don't see why I should doubt what they've said.

They've been quite reasonably open about the limitations of their technology:
it requires mapped-out roads, visible road markings, fair weather, et cetera.
It requires a human driver at the wheel because there are some traffic
situations it cannot yet navigate; one example given was meeting an oncoming
car on a narrow road, where the car was not sure there was enough room to
pass. In those situations, a voice politely announces that the human should
resume control. If the human does not, one can only assume the car comes to a
complete stop.

However, they have demonstrated the ability to autonomously navigate most
types of traffic, including reacting to unexpected obstacles or pedestrians,
dealing with panic stops ahead, negotiating with other drivers at a four-way
stop, et cetera.

They claim hundreds of thousands of miles of fully-autonomous driving, with
occasional human intervention being necessary in atypical circumstances. That
seems like a completely plausible claim considering what they've shown us, and
lacking any way in which they could profit from lying about it, I don't see
any reason to take that claim at anything but face value.

~~~
encoderer
The idea that it could've been a lie -- that the AI was engaged during the
crash -- doesn't have to be a conspiracy involving PR. I think if it was in
fact a lie it would be much more likely to originate from the engineer in the
car.

We work hard on the systems we build, and a simple lie like that could
absolutely seem to be in the interest of the project at a time when this
technology still freaks out lawmakers.

Should we believe Google? Sure, probably.

Is it absurd to question their veracity? Are you kidding? Are you familiar
with American capitalism?

~~~
Cushman
> The idea that it could've been a lie -- that the AI was engaged during the
> crash -- doesn't have to be a conspiracy involving PR. I think if it was in
> fact a lie it would be much more likely to originate from the engineer in
> the car.

Yeah, I was thinking the same thing. If there's any chance that it's a lie,
that's the only way it happens-- the guy at the wheel decided to take the fall
without telling _anyone_ else. Maybe he pushed the button by accident, the
thing freaked out, and he decided no one needed to know. Possible.

But that still doesn't fly. _Everything_ that happens to that car is measured
and recorded for later analysis. There is a verifiable record of when it is
under human control and when it isn't. Faking that record convincingly enough
to cover up the only public accident in the history of the project is almost
certainly beyond the capabilities of a single engineer who was just in a
fender bender. (And for what it's worth, Google has claimed to have logs which
prove that the car was in manual mode, which I assume are available to
legislators.)

Human beings crash cars all the time. Autonomous vehicles crash-- well,
there's actually no evidence that Google's self-driving car has ever
crashed[0]. So if one of their self-driving cars crashes with a human behind
the wheel, outside of the context of an autonomous test-drive, and he says at
the scene that he was driving, and Google confirms that they have proof that
he was driving, and considering that lying to the public about something
provable is a really, really bad idea when you're trying to get a law
passed...

If there were _any_ evidence, any _shred_ of inconsistency to their story, I'd
be skeptical. But there isn't. There's just no reason to doubt them besides
"Companies always lie." Or "Of course they'd say that." Yeah, I'd say that's
absurd.

[0] In traffic, obviously. One can only presume it has hit many obstacles
during development.

------
yason
The concept isn't new: planes have been landed automatically for decades
using the radio beams of the ILS. But compared to a landing procedure on a
single runway, in controlled airspace, with the necessary ground equipment to
aid in the process, car traffic is far less static.

The automatic following system will bump into fundamental physics, and the
close proximity of the participating cars will only magnify the effect.

Consider a ten-car "train" (or convoy) automatically following the first car.
Their speed is higher than usual and they drive within a few metres of each
other, all carefully controlled by a computer. Then the first car hits a big
elk (moose).

The weight of the animal alone is sufficient to cause a significant, sudden
decrease in the speed of the car body. While the sensors on the second car do
notice that the first car has slowed down (due to the impact), it cannot
possibly brake to a stop without hitting the first car. If the convoy is
driving at 100 km/h, no car is going to stop in time if the car ahead loses a
significant fraction of its speed in an instant.

Now, what happened between the first and second cars will propagate far back
through the convoy, and the end result is a pile of ten cars mostly crushed
into each other.

Maybe there are no elk on the highway. Then make it someone who learned his
driving skills on a Russian highway (you must have seen the YouTube videos).
Or someone's tire blows out and that car spins into the adjacent lane,
straight in front of the first car.

But there's a good reason to keep a speed-dependent 2-4 second safety margin
between cars, and robotic controls and computerised radio links aren't going
to change the physics.
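
You can sanity-check this with a few lines of kinematics. A minimal Python
sketch; the impact speed loss, braking rate, and reaction latency are my own
assumptions, not figures from the article:

    DT = 0.001         # simulation step (s)
    V0 = 100 / 3.6     # convoy speed: 100 km/h in m/s
    GAP = 6.0          # initial separation (m)
    BRAKE = 8.0        # hard braking, roughly 0.8 g on dry asphalt (m/s^2)
    LATENCY = 0.2      # follower's sensing + actuation delay (s), assumed

    v_lead = V0 * 0.5  # assume the leader instantly loses half its speed
    v_follow = V0
    gap, t = GAP, 0.0
    while gap > 0 and (v_lead > 0 or v_follow > 0):
        v_lead = max(0.0, v_lead - BRAKE * DT)       # leader brakes at once
        if t >= LATENCY:                             # follower reacts late
            v_follow = max(0.0, v_follow - BRAKE * DT)
        gap += (v_lead - v_follow) * DT
        t += DT

    if gap <= 0:
        print(f"collision at t = {t:.2f} s, closing speed "
              f"{(v_follow - v_lead) * 3.6:.0f} km/h")
    else:
        print("no collision")

With those numbers the second car hits the first about 0.4 seconds after the
impact, at a closing speed of over 50 km/h, and the same crash then replays
down the rest of the convoy. No plausible latency or braking figure rescues a
6 m gap when the car ahead sheds half its speed instantly.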

~~~
Too
You don't even have to hit a sudden obstacle to get problems, trains of
vehicles also add the problem of increased harmonics. Say the second car
oscilates with +/-10cm from the desired length from the first car. How will
this propagate to the last car if they all have the same controller, or if
they all have different controllers? I know the uni here has been doing
research about this exact problem in cooperation with volvo cars.
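
This is the string-stability problem, and it's easy to see numerically. A toy
Python sketch (the gains, spacing, and disturbance are all made-up numbers,
not SARTRE's controller or the research setup): ten cars, every follower
running the same PD controller on the gap to its predecessor, with a small
speed oscillation injected at the front:

    import math

    # Toy platoon: 1 leader + 9 followers, each follower running the same
    # PD controller on the gap to the car ahead. All numbers are made up.
    N = 10              # cars in the platoon
    L = 6.0             # desired gap (m)
    V = 25.0            # nominal speed (m/s)
    KP, KD = 1.0, 2.0   # identical PD gains for every follower
    W, A = 0.5, 0.5     # lead car's speed wobbles +/-0.5 m/s at 0.5 rad/s
    DT, T = 0.01, 120.0

    x = [-i * L for i in range(N)]  # start in perfect formation
    v = [V] * N
    worst = [0.0] * N               # largest |gap error| seen per car

    t = 0.0
    while t < T:
        v[0] = V + A * math.sin(W * t)          # disturbance at the front
        acc = [0.0] * N
        for i in range(1, N):
            gap_err = (x[i - 1] - x[i]) - L
            acc[i] = KP * gap_err + KD * (v[i - 1] - v[i])
            worst[i] = max(worst[i], abs(gap_err))
        for i in range(1, N):
            v[i] += acc[i] * DT
        for i in range(N):
            x[i] += v[i] * DT
        t += DT

    for i in range(1, N):
        print(f"car {i}: max gap error {worst[i]:.2f} m")

With predecessor-only feedback and a constant spacing target, the printed
maximum gap error grows with every position down the chain. That
amplification of low-frequency disturbances is the classic string-instability
result, and damping it (e.g. by also broadcasting the leader's state to every
car) is presumably what that research is about.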

------
finnh
This is a great notion for long distance road trips. It would be pretty nice
when cruising cross-country to drift in behind an 18-wheeler and then read a
book.

I think we'll get fully autonomous vehicles before this type of thing becomes
common, but it's pleasant to envision a future where cars on the freeway form
little spontaneous communities, with lead cars broadcasting "hey I'm headed
here, hitch a ride" and others falling in behind.

Yes, I know there are a million ways a griefer could cause problems. Yes, I
know that driving on the freeway in the middle of nowhere is one of the
easier scenarios for a completely self-driven car, and that no lead is
necessary. I still think self-organized caravans are a cool notion.

~~~
nosse
I was a little surprised at first by: "three cars behind the truck at an
average separation of 6m". But this is probably meant to reduce drag. It
doesn't matter what powers our cars (gasoline, biofuel, electricity): less
drag means less energy, and we need every technology that can deliver that.

This is probably a very important step towards self-driving cars, because a
commercial product of some kind will bring loads of money into the field,
even if it's very basic.

------
conanite
The 6-metre distance between cars is probably meant to strongly discourage
other drivers from attempting to cut in, which would banjax the laser
rangefinders and other kit.

With the lead truck sending instructions, this is a client-server setup,
compared to Google's independent peer-to-peer approach. It's not self-driving;
it's remote control of a train of cars behind you. The Volvo experiment seems
much less ambitious than Google's, and might be more successful in the narrow
case of long-distance motorway travel. But I don't see how to generalise it
to everyday driving.

Add an ad-hoc mesh network to Google's cars and anything could become
possible.

------
brianbreslin
Question, since it wasn't answered in the article (I looked): do these cars
tether to the truck electronically? I.e. is there a virtual leash/tow going
on, or do they use the truck as a guidepost? Could they have driven without
the truck? Did other cars cut in between them at any point?

This whole move into self-driving vehicles is fascinating, though.

~~~
ralfd
"Wirelessly streamed data from the lead vehicle tells each car when to
accelerate, break and turn, all in real-world traffic conditions."

Seems like a wireless leash.
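
The article doesn't describe the actual protocol, but presumably the leash
amounts to the lead vehicle broadcasting small state/command packets many
times a second, with each follower falling back to its own sensors when
packets stop arriving. A purely hypothetical sketch; every field name and
number below is a guess, not SARTRE's design:

    from dataclasses import dataclass

    # Purely hypothetical "leash" packet: none of these field names,
    # units or rates come from the article or from SARTRE itself.
    @dataclass
    class LeadVehicleUpdate:
        sequence: int        # detect dropped packets on the radio link
        timestamp_ms: int    # sender clock, for staleness checks
        speed_mps: float     # lead vehicle's current speed
        accel_mps2: float    # commanded acceleration (negative = brake)
        steering_rad: float  # commanded steering angle

    # If no fresh packet arrives within this window, a sane design would
    # fall back to the car's own sensors and hand control to the driver.
    STALE_AFTER_MS = 200

That staleness fallback is also the obvious answer to the interference
question upthread: lose the link and the leash has to degrade gracefully, not
just go silent.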

------
tropin
Is it legal to let cars self-drive in Spain?

~~~
NLips
When does a car become self-driving? I imagine that distance control is
already legal (i.e. adaptive cruise control, where the car brakes and
accelerates to maintain distance behind the car in front, up to a maximum
speed), and this is a further extension of that with steering added. There's
still a person driving the truck at the front.

~~~
tomgallard
Though lots of newer cars also have lane assist (so if you're drifting out of
your lane without signalling, it will steer you back in).

I think it's becoming clearer that there's no strict dividing line between
computer-driven and human-driven.

Rather, we'll slowly move to more and more being done by the computer, and
less and less by the human (in this example, the computer taking over the
motorway driving, then handing back to you when it's time to return to normal
roads).

This was also covered in the article about the Google car last week, which
hands over to the human when it gets into a sticky or narrow situation.

------
jeza
There's actually a proven and safe method of accomplishing the same task, one
that has been in use for the better part of two centuries. It works by having
passengers wait on a platform, step onto a carriage, and take a seat while
the driver does all the work. Yeah, it's called a railway. :)

You can even take your car onto such a system (motorail). While it's
currently an expensive niche, perhaps prices would come down with greater
utilisation. Indeed, if world oil prices increase enough, it may well be
cheaper to load a bunch of vehicles onto a train and tow them along for long
distances than to drive them individually on roads.

------
atleta
It may not be that interesting for cars, but it is for trucks. A friend of
mine is working on a similar system for trucks. The point there is that the
group of trucks would actually behave as a train: they'd need only one
driver, or at least the other drivers could rest. Driving this close together
also saves fuel.

On the "what happens when the lead car hits an elk" scenario: the lead
vehicle could (and probably does) transmit its own parameters (speed,
acceleration, etc.), so the followers can start braking on the broadcast
command instead of waiting to see the gap close.
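
A minimal sketch of the difference that feedforward makes; the gains and
braking profile are made-up numbers, not my friend's system or SARTRE's:

    # One leader + one follower, 6 m apart at 90 km/h. Compare a follower
    # that only reacts to the measured gap (feedback) with one that also
    # applies the leader's broadcast deceleration (feedforward).
    # Gains and the braking profile are illustrative assumptions.
    DT = 0.001
    L = 6.0                  # desired gap (m)
    KP, KD = 1.0, 2.0        # PD gains on gap error / relative speed

    def min_gap(feedforward: bool) -> float:
        v_lead = v_fol = 25.0
        x_lead, x_fol = L, 0.0
        lowest, t = L, 0.0
        while t < 15.0:
            # leader brakes at 6 m/s^2 between t = 1 s and t = 3.5 s
            a_lead = -6.0 if 1.0 <= t < 3.5 else 0.0
            gap = x_lead - x_fol
            a_fol = KP * (gap - L) + KD * (v_lead - v_fol)
            if feedforward:
                a_fol += a_lead          # mirror the broadcast braking
            v_lead += a_lead * DT
            v_fol += a_fol * DT
            x_lead += v_lead * DT
            x_fol += v_fol * DT
            lowest = min(lowest, x_lead - x_fol)
            t += DT
        return lowest

    print(f"feedback only:    min gap {min_gap(False):.2f} m")
    print(f"with feedforward: min gap {min_gap(True):.2f} m")

With feedback alone the follower gives up most of the 6 m gap before its
controller catches up; with the broadcast deceleration applied as feedforward
the gap barely moves. It doesn't repeal the physics of the elk scenario,
where the leader sheds speed faster than any brakes can match, but for
ordinary hard braking it's the difference between a convoy and a pile-up.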

------
ccarpenterg
It's not a self-driving car. Please change the title.

------
kbronson
And how did they avoid the reckless Spanish Guardia Civil? They are thirsty
for anyone's money, and they won't be stopped by this "self-driving" crap.
MULTA! ("Fine!")

~~~
excuse-me
When the highway patrol cars came past, the Volvos drove underneath the
trailers of the big rigs, and another rig overtook the first and hid them
from view.

Just press the Smokey-and-the-Bandit button on the self-drive console.

