You'll need to purchase a number of things beyond the Raspberry Pi itself (camera, battery, car, servo controller board, wires, plus a mount for it all that you 3D print, purchase, or build).
Good job getting PWM steering right. I had only 3-state steering (full left, center, full right) and it turned out to be quite a limitation in the end.
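For anyone curious what proportional steering looks like on a Pi, here's a minimal sketch using RPi.GPIO's software PWM. The pin number, pulse widths, and angle range are assumptions you'd tune to your own servo:

```python
# Minimal sketch: proportional steering with a hobby servo on a Raspberry Pi.
# Assumes the servo signal wire is on GPIO 18 and expects standard 50 Hz PWM
# with ~1.0-2.0 ms pulses; adjust the pin and pulse widths for your hardware.
import RPi.GPIO as GPIO

SERVO_PIN = 18          # BCM numbering (assumed wiring)
FREQ_HZ = 50            # standard hobby-servo frame rate (20 ms period)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, FREQ_HZ)
pwm.start(7.5)          # ~1.5 ms pulse = centered

def set_steering(angle_deg):
    """Map a steering angle in [-45, +45] degrees to a duty cycle.

    -45 deg -> ~1.0 ms pulse (5% duty), +45 deg -> ~2.0 ms pulse (10% duty).
    """
    angle_deg = max(-45.0, min(45.0, angle_deg))
    duty = 7.5 + (angle_deg / 45.0) * 2.5
    pwm.ChangeDutyCycle(duty)

set_steering(-20.0)     # a gentle left turn, instead of "full left"
```

Software PWM on the Pi can jitter, which is presumably why builds like this use a separate servo controller board for the actual signal generation.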
Stack Exchange says it means 'all classifiers are trained jointly'; NVIDIA and Quora seem more in line with what I assume it means in this project. 'No human input to training sets or results as part of learning' -- is that what this is talking about?
Is it then unsupervised learning?
Really cool project!
In contrast to this, every significant AD effort uses hand-crafted parts for most of its pipeline. Machine learning is mostly used in the initial object detection and tracking steps.
The car learns to steer itself on an empty road. It's a good experiment to witness the power of deep learning and neural nets. For autonomous vehicles though, you need much more than that (e.g. sensor fusion, obstacle detection, localization, behaviour prediction, trajectory prediction, path planning, motion control, etc.).
> Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. […] Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection.
One can imagine that it might be more difficult to get a network to solve this large problem all at once, and that it might be easier to decompose the problem and solve each part. Would it be a good idea to guide the end-to-end system by first decomposing the problem and solving each part, then using that solution as a starting guess for the whole problem? I mean, the decomposition might be a reasonable approximation of how the whole problem should be solved. (Then again, it might not.)
Editing to add -- this is still a cool project. I don't mean to detract from it by pointing out that I think it could be done without the AI piece.
A true LIDAR system infers the round-trip time of photons, either directly (time-of-flight) or via some proxy like a phase or frequency shift (AMCW/FMCW LIDAR).
Lots of people include triangulation systems, which I disagree with. For example, the RPLidar A2 is just a nicely packaged version of Kurt Konolige (et al.)'s RevoLDS. It projects a laser spot and takes a picture of it. Add in some known geometry, and you can use triangulation to measure distances. It's easy to build; you can do it with any old camera and a laser pointer. It's incredible how overpriced the A2 is, given that the Revo was published as a $30 system.
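The triangulation math itself is short. A rough sketch, assuming the laser is mounted a known baseline from the camera and fires parallel to the optical axis; the focal length and baseline values below are placeholders you'd calibrate for your own rig:

```python
# Minimal sketch of laser triangulation ranging.
# Assumed geometry: laser beam parallel to the camera's optical axis, offset
# by a known baseline; the spot's pixel offset from the image center shrinks
# as distance grows (similar triangles).
FOCAL_LENGTH_PX = 700.0   # camera focal length in pixels (placeholder, calibrate)
BASELINE_M = 0.05         # laser-to-camera separation in meters (placeholder)

def distance_from_pixel_offset(pixel_offset):
    """Distance to the laser spot, given its offset (in pixels) from the image center.

    Similar triangles give: distance = focal_length * baseline / pixel_offset.
    """
    if pixel_offset <= 0:
        raise ValueError("spot not detected or effectively at infinity")
    return FOCAL_LENGTH_PX * BASELINE_M / pixel_offset

# Example: a spot 35 px off-center -> 700 * 0.05 / 35 = 1.0 m
print(distance_from_pixel_offset(35.0))
```

Note the 1/x relationship: accuracy degrades quickly with range, which is another reason triangulation units are cheap but short-ranged.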
A true LIDAR is much more difficult to build. If you go time-of-flight then you need a good laser and excellent timing electronics. If you go phase-shift then you need beam mixing/separation optics, beam modulation, and a phase detector. Neither of these is easy for the average hobbyist. If you're interested, buy a laser tape measure from Leica and have a look inside to see how it works.
I believe the LIDAR-Lite is a proper LIDAR system - it ranges up to 40 m, which is a good hint that it's probably a true time-of-flight system. The Scanse Sweep is a LIDAR-Lite turned into a 2D scanner, as mentioned in the sibling post. A good indicator is price. Real LIDAR systems are expensive - starting at £1k typically for something like a Hokuyo. They also tend to be much better built than the hobbyist stuff.
In principle all you need is a pulsed laser, a big collecting lens, a fast rise-time photodiode, and a good (picosecond-accurate) timing circuit.
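To see why the timing needs to be that good: the range is just d = c·t/2, so timing error converts directly into range error. A quick back-of-the-envelope sketch (the numbers are illustrative):

```python
# Back-of-the-envelope: time-of-flight range and timing sensitivity.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Range = c * t / 2 (the pulse travels out and back)."""
    return C * round_trip_s / 2.0

# A 40 m target returns in roughly 267 ns.
print(tof_range_m(267e-9))   # ~40.0 m

# Sensitivity: 1 ns of timing error is ~15 cm of range error,
# which is why ToF LIDARs need sub-nanosecond or picosecond timing.
print(tof_range_m(1e-9))     # ~0.15 m per nanosecond
```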
With a model-train-like setup, you could have construction, pedestrians, etc.
You then create a neural net architecture that takes images as input, is trained against the recorded steering angles, and outputs steering angle predictions. This is a regression problem that can be expressed in plain English as:
"First train the neural network with a collection of images and associated steering angles. After training, if I were to give the neural network a new image it has never seen before, what steering angle would the network predict?"
NVIDIA has a paper on it, and I blogged about similar coursework I completed as part of Udacity's Self-Driving Car Nanodegree.