Hacker News
ALVINN, an autonomous land vehicle in a neural network (1989) [pdf] (cmu.edu)
46 points by scvalencia on March 29, 2017 | 16 comments



I've been in that autonomous vehicle, although it wasn't moving at the time. Their road-follower worked only in very good circumstances.

The amazing thing is how much progress there's been in CPU power. That thing took three racks of computers, and a crew of 5, to drive 5 MPH. They had a 3D LIDAR, though, with a line scanner and a tilting mirror. That was better than most of the LIDAR units until recently.


Animats, you've been at this longer than anybody on HN from what I can see, so a question for you: what is your estimate of the failure rate of these self-driving set-ups, and what kind of failsafe measures for shutting down gracefully are still on the table once a major sensor fails?

I'm wondering about this because I miss this element in most of the discussions around the theme: a human having a heart attack or a stroke behind the wheel is almost certainly going to cause an accident (single-vehicle at best, multiple vehicles at worst). But I don't quite see how that translates to sensor failure (or even CPU failure) in a self-driving car.


Absent mandatory standards, that's going to vary with the manufacturer. Volvo says they have redundant computers, sensors, and actuators. They've probably tested fall-over to backup. Google has some sensor redundancy, but it's not clear how well they protect against component failure.

If I were designing this, I'd have three independent control systems. System A does regular automatic driving. System B monitors system A, but it's only as smart as a stock anti-collision braking system is now. No AI. It has info from the radar and can slam on the brakes. Maybe steer a bit to get to the side of the road and stop. System C is purely a watchdog: if it detects that systems A and B have failed, it cuts propulsion and applies the brakes. If system C takes over, you may hit something, but you won't run away out of control.
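A minimal sketch of that A/B/C layering in Python (all names, numbers, and health flags here are hypothetical illustrations, not any vendor's actual architecture; a real design would run each system on physically separate hardware):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleCommand:
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0
    steer: float     # -1.0 (full left) to 1.0 (full right)

def system_a(healthy: bool) -> Optional[VehicleCommand]:
    """System A: the full self-driving stack. Returns None if it has failed."""
    if not healthy:
        return None
    return VehicleCommand(throttle=0.3, brake=0.0, steer=0.05)

def system_b(healthy: bool, radar_obstacle: bool) -> Optional[VehicleCommand]:
    """System B: a dumb radar-only anti-collision fallback. No AI."""
    if not healthy:
        return None
    if radar_obstacle:
        # Brake hard, edge slightly toward the shoulder.
        return VehicleCommand(throttle=0.0, brake=1.0, steer=0.1)
    # Creep along until a safe stop is possible.
    return VehicleCommand(throttle=0.1, brake=0.0, steer=0.0)

def system_c() -> VehicleCommand:
    """System C: pure watchdog. Cut propulsion, apply brakes.
    You may hit something, but you won't run away out of control."""
    return VehicleCommand(throttle=0.0, brake=1.0, steer=0.0)

def arbitrate(a_ok: bool, b_ok: bool, radar_obstacle: bool) -> VehicleCommand:
    """Fall through A -> B -> C, taking the first system still able to answer."""
    return (system_a(a_ok)
            or system_b(b_ok, radar_obstacle)
            or system_c())
```

The point of the chain is that each layer is simpler (and hence easier to verify) than the one above it, so a bug in the AI stack can't disable the watchdog.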

I'd ride in a Volvo or Google self-driving car. Uber, Otto, or Cruise, no.


So, something along the lines of the Shuttle computer design. That makes sense. Thank you for answering. I'm wondering about this because I expect the MTBF demon to kick in when the numbers go up. Right now the numbers are so small that equipment failure is negligible, but as soon as the numbers are high enough that these failures become more common the better architectures will start to stand out.


You will always have redundancy of sensors in case they fail. The front-looking camera usually has a complementary lidar, and most cars are even equipped with both camera and radar in case a sensor goes AWOL. This will be the case in self-driving cars as well. I think they will have even more redundancy than we see with ADAS cars today.


Every sentence you wrote contains the word 'will', so they are 'forward-looking statements' and evidence-free. I understand that we'd like this to be so, but there is no guarantee that it will be the case, and a single, non-redundant sensor going down could have an immediate impact on the car's ability to get itself off the road in a safe way.

This wasn't about opinions but about the state-of-the-art and the facts as we know them today. What will happen in the future is anybody's guess, my guess is that before long there will be a couple of major crashes involving self driving cars resulting in mandatory redundant systems and redundant (quorum based, see Animats' comment) processing units. Otherwise self driving cars will end up with a bad reputation, even if the chances of failure with any individual vehicle are small.

Statistically speaking, the number of components and their internal complexity are good indicators of how frequently you can expect such a system to fail. Self-driving cars are sufficiently complex that these are legitimate sources of worry, and simply waving those worries away with what 'will' happen does nothing for me; it is the how that interests me.


Of course they will fail at some point; there is nothing magical about a self-driving car. What I meant in the previous post is that it could be fail-operational; see, for example, Volvo's Drive Me project.


Whether having multiple sensors guarantees redundancy depends on what they are hooked up to, and how.

If the software they are connected to doesn't know what to do when the camera fails or starts sending bogus data, or can't even detect that it has happened, then you may still have a lidar, radar, etc., but that doesn't make the system fail-safe.


No, it doesn't make it fail-safe at all, and I don't think fail-safe is what's being aimed for here. What can be accomplished is fail-operational: if one sensor goes down, the car can use the other sensors to navigate itself to safety.
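The fail-safe vs. fail-operational distinction above can be sketched roughly as follows (the mode names and the three-sensor set are hypothetical illustrations; a real planner would be far richer):

```python
def plan_action(sensors):
    """Pick a driving mode from whatever sensors are still alive.

    sensors: dict of sensor name -> latest reading, or None if that sensor is down.
    """
    alive = {name: v for name, v in sensors.items() if v is not None}
    if {"camera", "lidar", "radar"} <= alive.keys():
        return "normal_driving"          # full perception: carry on
    if alive:
        return "limp_to_shoulder"        # fail-operational: degraded perception,
                                         # reduce speed and find a safe stopping spot
    return "emergency_stop"              # fail-safe last resort: no perception left
```

The middle branch is what makes the system fail-operational rather than merely fail-safe: losing one sensor degrades the mission instead of ending it in place.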


Now easily replicable (in smaller form) with a smartphone and an Arduino:

http://blog.davidsingleton.org/nnrccar/

Interesting story behind the creation of that: David Singleton was one of my "classmates" in Andrew Ng's ML Class in the fall of 2011. I was both amazed and pleased that he managed to create this demonstration after we had played around with building a neural network in Octave, and after seeing the many video clips about ALVINN that were part of the class and showed how it worked.

That class ultimately led to the founding of Coursera; the class that Sebastian Thrun and Peter Norvig were running at the same time (AI Class) led to Udacity. Coursera still offers that original ML Class (under a different name now).

As a part of Udacity's Self-Driving Car Engineer Nanodegree, one of the projects in the first term was to get a simulated car to drive around a track. Many (if not most) of us chose to implement NVidia's End-to-End CNN approach (there were also a few that repurposed ImageNet) - either in whole or in a modified form. Ultimately, with the right amount of training and datasets, the car would drive well around the track.

I find it amazing how far we've come in the short amount of time between that ML Class course and today, in the development of machine learning. I just hope we don't fall into another winter that'll take a decade or more to pull back out of...


This ALVINN promotional video is a good demonstration of the (1989) technology: https://www.youtube.com/watch?v=ilP4aPDTBPE


Basically the same core design as Tesla Autopilot and other autonomous software today... but the hardware to make a useful finished product just wasn't available yet. Way ahead of its time.


It's not just a matter of hardware. People didn't really know how to train neural networks more than 1-2 layers deep at the time. This is also before convolutional neural networks were invented by LeCun.

This network is literally 30x32 pixels => two NN layers => steering decisions. No convolution, no semantic segmentation. It's a very cool technical demonstration, but we've come a long way since, on every front.
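That pipeline is small enough to sketch in plain Python. The weights below are random and untrained, purely to show the shape of the computation; the layer sizes loosely follow the comment's description (30x32 input, one hidden layer, a bank of output units voting for steering directions), not necessarily the paper's exact topology:

```python
import math
import random

# Toy ALVINN-style network: a 30x32 grayscale "retina" feeds a small hidden
# layer, whose activations feed 30 output units, each representing a steering
# direction from hard left (index 0) to hard right (index 29).
IN, HIDDEN, OUT = 30 * 32, 4, 30

random.seed(0)  # illustrative random weights; a real network would be trained
w1 = [[random.uniform(-0.1, 0.1) for _ in range(IN)] for _ in range(HIDDEN)]
w2 = [[random.uniform(-0.1, 0.1) for _ in range(HIDDEN)] for _ in range(OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def steer(image):
    """image: flat list of 960 pixel intensities in [0, 1].
    Returns the index of the winning steering unit."""
    hidden = [sigmoid(sum(w * p for w, p in zip(row, image))) for row in w1]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    return output.index(max(output))

direction = steer([0.5] * IN)  # some index in 0..29
```

Training amounted to backpropagating the difference between the network's steering vote and the human driver's actual steering angle, frame by frame.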


...and apparently it still took a good amount of time to train.

If you dig enough through the various papers and such written about ALVINN, you'll find links to older projects, and a couple of "follow-on" ones as well.

One of the earlier projects that (I believe) led to ALVINN was the CMU Terregator, a custom six-wheel chassis about the size of a large quad-cycle that was designed to navigate around the CMU campus. It was basically a testing platform for autonomous vehicle technologies. IIRC, this was around 1984 or so.

After the Terregator, but prior to ALVINN (and which ultimately led to the funding for ALVINN), there was a project to develop a radio-controlled vehicle that could navigate through an unstructured environment (a cluttered alleyway, IIRC) called "DAVE".

These projects and more are all referenced in the NVidia End-to-End CNN paper.


Yes, agreed. Software had also not advanced sufficiently.


Another project with comparable design and objectives was the EUREKA PROMETHEUS Project (https://en.wikipedia.org/wiki/Eureka_Prometheus_Project), which led to the "VaMP": https://en.wikipedia.org/wiki/VaMP

From the article: In 1994, the VaMP and its twin VITA-2 were stars of the final international presentation of the PROMETHEUS project in October 1994 on Autoroute 1 near the Charles-de-Gaulle airport in Paris. With a safety driver and guests on board, the twins drove more than 1000 km in normal traffic on the three-lane highway at speeds up to 130 km/h. They demonstrated lane changes left and right, autonomously passing other cars after maneuver approval by the safety driver.

One year later, the autonomous Mercedes-Benz drove more than 1,000 miles (about 1,600 km) from Munich to Copenhagen and back in traffic at up to 180 km/h, again planning and executing maneuvers to pass other cars with safety-driver approval. However, only in a few critical situations (such as unmodeled construction areas) did a safety driver take over completely. Again, active computer vision was used to deal with rapidly changing street scenes. The car achieved speeds exceeding 175 km/h on the German Autobahn, with a mean distance between human interventions of 9 km. Despite being a research system without emphasis on long-distance reliability, it drove up to 158 km without any human intervention.

Not too shabby for the 90s I'd say.



