
Nissan's Path to Self-Driving Cars? Humans in Call Centers - hliyan
https://www.wired.com/2017/01/nissans-self-driving-teleoperation/
======
TillE
This is the obvious stopgap measure so you can get to market when your self-
driving software is very very good but not quite perfect. You use the data
from these situations to improve the software, requiring fewer and fewer
manual interventions until you only need human assistance in truly exceptional
situations like physical damage to the sensors.

I wouldn't be surprised to see an almost-entirely-self-driving car service in
certain areas in the nearish future. As long as it's 100% safe and only
requires assistance rarely, it makes sense to start building out the
infrastructure sooner rather than later.

~~~
stephengillie
Level 4 self-driving seems the second-least desirable after Level 0, as it's
entirely passive but requires full active attention. It's a ripe situation for
distraction, something Tesla cars attempt to combat with the "wheel-squeeze".

From a cynical perspective, it's easy to see a future where we constantly
approach, but never quite reach, full Level 5 autonomy - or at least not with
existing technology. Maybe Level 5 will prove to be an NP problem. Marketing
teams are likely going to promote terms like "Level 4.1 self-driving" or
"Level 4.5 self-driving" or "Level 4.9 self-driving", as edge-cases are slowly
resolved.

~~~
21
> Maybe Level 5 will prove to be an NP problem.

Humans are able to do "Level 5" driving, so...

~~~
twic
For a subset of cases, but not all of them, as traffic safety statistics show.

~~~
Robotbeat
Yup. Humans are allowed to make mistakes and fail fatally. If self-driving
cars make a single fatal mistake, then watch out for negative press coverage
for weeks. Why? Because every failure can be traced back to a root cause.
Every root cause (other than something like another driver literally trying to
kill you) looks like an incompetent screw-up after peeling back all the
layers. In principle, self-driving cars are basically perfectly safe (as a
system). So every failure causes righteous finger-pointing.

...which is also partly the motivation to improve.

I expect self-driving cars long-term to be a lot like air travel. It'll have a
reputation for being super unsafe, with lots of people paranoid of it, for
decades after it has far eclipsed the safety of regular human car driving. The
paranoid hyper-focus and ridiculous press coverage are also what will enable it
to become essentially perfect (like passenger air travel is today in the
developed world).

~~~
alkonaut
> Yup. Humans are allowed to make mistakes and fail fatally.

It's because in almost all cases, one fatal mistake results in one fatal
accident. And in most cases, the failing party is part of the accident,
giving us reason enough to believe people do what they can to avoid it.

With AI, one sloppy mistake can be the root cause of 100 fatalities, with no
risk to the one who made the mistake. That's part of the reason we don't
accept it.

~~~
Robotbeat
I think you’re right that human drivers make it a lot easier to blame the
victim (even though it’s really the whole system that made it possible for the
human to make such a mistake that is responsible... the system isn’t held to
account because it’s just so easy to blame the human for making the proximate
fatal error).

------
dogruck
This idea will never work. Putting aside all other reasons — the liability
equation doesn’t make sense.

Imagine being a lowly “meat bag” in the call center. You’re presented with an
endless stream of situations so difficult that the robot can’t solve them.
Eventually, you make a mistake, and people die. Checkmate.

~~~
burger_moon
Sounds like 9-1-1 operators. I'm sure there's a way to make it work, but it
won't happen overnight, and unfortunately most processes are the result of
learning from mistakes, which in this context are potentially life-threatening.

~~~
dogruck
Seems different than 9-1-1 operators, who are responsible for passing
information to first responders. These autonomous car operators would make
decisions such as “go ahead and ignore the construction cones.” They won’t
simply say, “okay, I’ll send a police crew out to the intersection to triage
the confusing construction cones.”

------
ceejayoz
I think I find the prospect of a random call center employee being able to
commandeer my vehicle more concerning than AI screwups.

~~~
jaclaz
>I think I find the prospect of a random call center employee being able to
commandeer my vehicle more concerning than AI screwups.

Whether the commands are given by an AI or a call center employee, I would
additionally highlight the "remote" part (please read as "over a possibly flaky
GSM or satellite connection that can have a hiccup at _any_ time").

~~~
21
The way the article (and pictures) describes it, the car will receive a full
instruction package before starting to execute it. And all regular systems
(collision avoidance, ...) will presumably still be active.

~~~
jaclaz
>The way the article (and pictures) describes it, the car will receive a full
instruction package before starting to execute it. And all regular systems
(collision avoidance, ...) will presumably still be active

Will receive the "full instruction package" IF the connection works at the
needed time, or at some point before it is needed.

The approach could be very good for whatever you have time for AND that is not
"vital" (for example, correcting a navigation plan), but for anything
resembling an "emergency" the reliability of the connection at the right time
needs to be taken into account.
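To make the point concrete, here's a minimal sketch of what that implies: any fetch of a remote instruction package has to carry a deadline and a locally computed fallback, because the link can drop at exactly the wrong moment. All names, timeouts, and the plan format are invented for illustration, not from the article.

```python
import queue

# Local fallback that needs no connectivity at all (hypothetical action name).
SAFE_STOP = {"action": "pull_over_and_stop"}

def request_remote_plan(uplink, timeout_s=5.0):
    """Ask the call center for a full instruction package.

    Returns the package only if it arrives, complete, before the deadline;
    otherwise falls back to a locally computed safe action.
    """
    try:
        plan = uplink.get(timeout=timeout_s)  # blocks; the link may be flaky
    except queue.Empty:
        return SAFE_STOP          # connection hiccup: do the safe thing
    if not plan.get("complete"):  # a partial download is as bad as none
        return SAFE_STOP
    return plan

# A dead link: nothing ever arrives, so the car falls back to SAFE_STOP.
dead_link = queue.Queue()
print(request_remote_plan(dead_link, timeout_s=0.1))
```

The design choice being illustrated: the fallback is decided onboard, so the worst a flaky connection can do is delay the journey, never leave the car without a decision.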

------
trebligdivad
It makes a lot of sense to me; we're way off an AI being able to cope with
random construction sites. But then it does have to be able to cope with
losing comms. For example, there was a recent self-driving truck case where
the truck came to a stop because the remote office had just lost power - that
would be a mess if there were thousands of them on the streets.

~~~
username223
> we're way off an AI being able to cope with random construction sites.

And this is why level 5 autonomous cars are basically strong AI: current roads
assume that a human controls every vehicle. When some guys want to unload a
truck full of bricks, they park, turn on the flashers, and wave traffic
through in alternating directions. Humans understand this, but good luck
teaching a robot. The Nissan solution of having the robot freak out for 30
seconds before a remote operator moves it out of the way is better than
nothing, but hardly great.

~~~
hliyan
This just gave me a thought. What if the onboard self-driving system doesn't
really need the teleoperator to take over completely, but to select from a set
of decisions in difficult situations? The teleoperator doesn't need to watch
the footage continuously, but only needs to respond quickly when the onboard
system generates an alert.

~~~
notahacker
Only a subset of decisions (can I proceed as normal) are easily dealt with as
yes/no answers. "How do I proceed through this environment which is playing
havoc with my sensors?" is less straightforward.

Of course, there's a big class of problems which can't be dealt with by remote
takeover at all, such as "should I hit this unclassified small obstruction
that's entered the roadway or brake/swerve in a manner almost certain to
result in a collision?"

------
stephengillie
Having rented a 2017 Altima last summer, and finding it to have only non-
adaptive cruise control, I'm somewhat surprised to see an article on Nissan
cars having self-driving capabilities. From another article[0], Adaptive
Cruise, Lane Assist, and Automated Braking (ACLAAB) are set to launch in the
2019 Leaf. But the article makes it clear that these in no way make the car
autonomous, and the driver must constantly be ready to engage. Instead of
using a "wheel-squeeze" like Tesla, Nissan's car uses steering wheel torque
data to tell if you're holding the wheel.

Combining the two, it sounds like their system is ACLAAB that comes to a full
stop and "phones home" when it gets confused. But how often will it get
confused? Will it be better than the self-checkout bag sensor detecting your
reusable bag correctly? And how long will it take someone in their call center
to "pick up their phone call" and remotely drive the car?

This might be an acceptable situation for driving elderly people to the
grocery store. But it's going to cause a great deal of frustration for other
drivers - having to deal with cars that stop at random spots on the street or
highway - and sit there for up to a minute as someone remotes in to drive.

[0] [http://www.autoguide.com/auto-news/2017/07/nissan-
propilot-a...](http://www.autoguide.com/auto-news/2017/07/nissan-propilot-
assist-takes-adaptive-cruise-control-to-the-next-level.html)

~~~
kalleboo
I thought Tesla also used wheel torque data?
[https://teslamotorsclub.com/tmc/threads/how-does-
autopilot-s...](https://teslamotorsclub.com/tmc/threads/how-does-autopilot-
sense-hands-on-wheel.75905/)

> _One method is to detect torque on the steering wheel, hinted Musk. “So we
> might issue a visual and auditory alert to make sure you’re okay.”_

------
YeGoblynQueenne
I'm trying to articulate how pointless this sounds. First, you create some AI
system that's supposed to help take the human out of the loop in order to
improve safety (allegedly). You realise there are still situations your AI
can't handle. You decide to give control to a human outside the car, in a call
centre. So now the human brain is back in control, except it's not the human
in the car (presumably, there is a human in the car?), but someone completely
different.

So you have to spend the resources to develop self-driving technology _and_ to
maintain a stable of remote drivers. And all that, for what? Human brains are
still in control of cars and car AI is still dumb as bricks often enough to
make them less safe than humans. If you had problems with humans forced to
jump in and take control at a moment's notice where the car failed, before,
now you're making these problems even worse by having someone take control
_remotely_ so that they're even less aware of the unfolding situation than the
human in the car.

It's just ...pointless. It makes even less sense than a car that drives itself
safely only when you're paying attention (hey, I can kill myself when I'm not
paying attention just fine on my own, I don't need an AI car to do that).

~~~
josephpmay
The point is that you would only need this 0.01% of the time, and in everyday
driving, the autonomous vehicle would be safer than a human.

~~~
YeGoblynQueenne
Where is that "0.01% of the time" coming from?

And what about the claim that the rest of the time the self-driving car is
safer than a human? How do we know that self-driving cars are safer than cars
driven by a human?

------
scottmcdot
I might be missing something, but why wouldn't the passengers step in and
override the car? Is this assuming the car is unmanned?

------
tcmb
The current problem, as mentioned in the article, is that signage for
exceptional situations is designed to be good enough for human drivers, but
not for self-driving cars.

When self-driving cars become more prevalent, couldn't we have standardized
signage that allows the cars to deal with exceptional road conditions like the
example in the article?

Agreeing on industry-wide standards (insert relevant xkcd link here) can make
such situations solvable for the machine, by employing well-recognizable clues
that signify, for example, 'ignore red light and drive on opposite side of the
street'.
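As a toy illustration of what such a standard could look like (the codes and overrides below are entirely made up, not any real proposal): a machine-readable code on a temporary sign maps to an explicit, bounded override of the normal driving rules.

```python
# Hypothetical registry of standardized work-zone sign codes.
SIGN_OVERRIDES = {
    "WZ-101": {"ignore_red_light": True, "use_opposite_lane": True},
    "WZ-102": {"max_speed_kmh": 30},
}

def apply_sign(code, default_rules):
    """Merge a recognized work-zone sign's overrides into the driving rules."""
    rules = dict(default_rules)
    rules.update(SIGN_OVERRIDES.get(code, {}))  # unknown codes change nothing
    return rules

print(apply_sign("WZ-101", {"ignore_red_light": False}))
```

The key property is that an unrecognized or damaged sign leaves the default rules untouched, so the failure mode is the car behaving too conservatively, not unsafely.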

------
Aspyre
Why was this article resurrected from Jan 2017? What makes this relevant now?

