
Traffic Sign Recognition with TensorFlow - jonbaer
https://medium.com/@waleedka/traffic-sign-recognition-with-tensorflow-629dffc391a6#.ifsyvpm67
======
repsilat
Funny that self-driving cars will become widespread years before self-driving
trains will. Trains don't have steering wheels, can't swerve to miss
pedestrians, don't have to obey roadside traffic officers' directions, don't
have to see road markings... All they need to do is look at bright signals in
a small set of known locations.

Meanwhile the rail industry wants to delay, spending hundreds of millions of
dollars to move to state-of-the-art new signalling systems. Fine, sure, do
that, but in the meantime you can save a truckload of cash (and improve
safety, and reduce lateness) by replacing each driver with an iPhone and a
suction cup.

~~~
GordonS
In the UK at least, there is a lot of resistance to automation from the
unions. They are well organised, and able to essentially hold the rail
companies to ransom until they back down on automation plans.

~~~
repsilat
First thing to do is sell it as driver-assistance technology. More drivers
move to the "just open and close the doors" role (akin to the "man and a dog"
factory-of-the-future model).

Then you can let drivers just not be replaced when they retire. If the drivers
union kicks up a fuss and they strike, well, great -- ask them to take the
next day off too.

------
pakl
If he ever tries to train deeper models in this manner and test them on real
video frames from a car, he will be in for some unpleasant surprises.

Learning to map discrete snapshots of objects to labels won't yield a system
that can deal with, e.g., the reality of lighting conditions that a car will
experience.

~~~
waleedka
Author here. This first part is simple by design. It's targeted to those
getting started in the field.

~~~
pakl
Oh-- my comment wasn't about the simplicity of the first part. (In fact, this
is a great tutorial, thanks for posting.)

My comment is about the approach of using supervised learning to map directly
from images to category labels.
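(For readers following along: the approach under discussion, mapping images straight to category labels with supervised learning, can be sketched minimally. This is an illustrative toy with random pixel data and made-up class counts, not the article's actual code; fitting random labels also happens to illustrate pakl's rote-memorization point.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 100 flattened 8x8 grayscale images with random pixel
# values, and 5 made-up sign classes. Real data would come from images.
X = rng.normal(size=(100, 64))
y = rng.integers(0, 5, size=100)

# A single softmax layer mapping pixels directly to labels -- the
# "images -> category labels" mapping being discussed, minus the
# convolutional layers a real model would have.
W = np.zeros((64, 5))
b = np.zeros(5)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss():
    p = softmax(X @ W + b)
    return -np.log(p[np.arange(len(y)), y]).mean()  # cross-entropy

initial = loss()  # uniform predictions at W=0: ln(5)
for _ in range(200):  # plain batch gradient descent
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0  # dL/dlogits = p - onehot(y)
    W -= 0.1 * (X.T @ p) / len(y)
    b -= 0.1 * p.mean(axis=0)
final = loss()
# The loss drops even though the labels are pure noise: the model is
# memorizing its training set, not learning anything that generalizes.
```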

~~~
waleedka
I'm not sure I'm following. Are you saying that using supervised learning is
the wrong approach here? What would you use instead?

~~~
pakl
Yes, that's right! The way you are using supervised learning here will force
the neural networks to map from textures directly to human labels. A purely
feedforward network, no matter how deep, can only rote memorize the effects of
the world on the images (viewing angle, lighting, etc) and will not
generalize.

Another shortcoming of feedforward nets is they cannot change how they
interpret local features based on integrated global aspects of a scene, like
ambient lighting or backlighting.

As a result the network will fail to classify on new real world images.

If instead you use recurrence to learn features that take the dynamical and
global effects into account, you'll have a better chance of success. One
example of how we did this is here [1].

[1] [http://blog.piekniewski.info/2016/11/04/predictive-vision-in-a-nutshell/](http://blog.piekniewski.info/2016/11/04/predictive-vision-in-a-nutshell/)

~~~
nomel
I know very little of machine learning, so...

It seems that your system is supervised for the initial training. Once the
system is somewhat trained, is it possible to let it loose with unsupervised
training, say if the confidence is in some higher range between some frames?
For example, say there was a period of frames with very high confidence, some
slight occlusion or shadow that lowered the confidence, and then another
period of high confidence. With something like motion prediction, and some
confidence in where the sign was, could you use that period of lower
confidence to help train, maybe with some verification from a known,
complicated, supervised data set?

tldr; Are there methods to allow these systems to keep learning once they're
deployed?

edit: And this may interest you, the brain appears to predict motion:
[https://whitneylab.berkeley.edu/people/gerrit/MausNijhawan.PsychScience.2008.pdf](https://whitneylab.berkeley.edu/people/gerrit/MausNijhawan.PsychScience.2008.pdf)
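(What nomel describes is close to confidence-based pseudo-labeling, a.k.a. self-training. A minimal sketch, with made-up confidence values, predictions, and threshold, of how low-confidence frames could inherit labels from agreeing high-confidence neighbours:)

```python
import numpy as np

# Hypothetical per-frame classifier confidences and predicted class ids
# for a short clip where frames 3-4 are occluded/shadowed. All numbers
# here are invented for illustration.
confidences = np.array([0.97, 0.95, 0.96, 0.55, 0.60, 0.94, 0.98])
predictions = np.array([3, 3, 3, 1, 3, 3, 3])

HIGH = 0.9  # confidence threshold (an assumed value, needs tuning)

def pseudo_labels(confidences, predictions, high=HIGH):
    """Give a low-confidence frame the label its surrounding
    high-confidence frames agree on, if they agree at all."""
    high_idx = np.where(confidences >= high)[0]
    labels = {}
    for i in np.where(confidences < high)[0]:
        before = high_idx[high_idx < i]
        after = high_idx[high_idx > i]
        if len(before) and len(after) and \
                predictions[before[-1]] == predictions[after[0]]:
            labels[int(i)] = int(predictions[before[-1]])
    return labels

# Frames 3 and 4 inherit label 3 from their confident neighbours; those
# (frame, label) pairs could then be fed back in as training examples,
# ideally with the verification step nomel mentions.
```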

~~~
felippee
It's rather the opposite: it is unsupervised initially, and just learns to
predict its input. Note there is a point of confusion here: the unit itself
uses supervised learning (supervised by the future signal), but all in all
nothing needs to be labeled, because reality just unfolds on its own. Then,
once this is done, one can use the trained features (representations) to
train supervised tasks, such as the street sign tracking.

PS: yes, there is a strong literature suggesting that the brain is predicting
a bunch of things. Check this long review paper
[http://www.fil.ion.ucl.ac.uk/~karl/Whatever%20next.pdf](http://www.fil.ion.ucl.ac.uk/~karl/Whatever%20next.pdf)
for plenty of ideas and details.
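(The two-stage recipe felippee describes -- first train to predict the next input, where the "label" is just the future signal, then reuse the learned features for a supervised task -- can be sketched. This is a toy with a synthetic sinusoid signal standing in for video, assumed learning rate and sizes, and is not the actual predictive-vision system:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 -- "unsupervised" predictive training: the target for each
# step is simply the NEXT input, so nothing is hand-labeled.
T, D, H = 200, 16, 8
t = np.arange(T)[:, None]
freqs = np.linspace(0.1, 1.0, D // 2)
X = np.concatenate([np.sin(t * freqs), np.cos(t * freqs)], axis=1)

W_in = rng.normal(scale=0.1, size=(D, H))   # encoder: the reusable features
W_out = rng.normal(scale=0.1, size=(H, D))  # decoder: predicts the next frame

def predict_next(x):
    return np.tanh(x @ W_in) @ W_out

def mse():
    return float(np.mean((predict_next(X[:-1]) - X[1:]) ** 2))

err_before = mse()
for _ in range(500):  # plain gradient descent on next-step prediction error
    h = np.tanh(X[:-1] @ W_in)
    d = 2.0 * (h @ W_out - X[1:]) / len(h)
    grad_out = h.T @ d
    W_in -= 0.05 * X[:-1].T @ ((d @ W_out.T) * (1 - h ** 2))
    W_out -= 0.05 * grad_out
err_after = mse()

# Stage 2 (not shown): freeze W_in and train a small supervised head,
# e.g. a sign classifier, on the learned representation tanh(x @ W_in).
```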

~~~
nomel
Oh, wow. So really this allows prediction of a complex input, in a general
sense?

If I understand correctly, in context of that visual example, if it were
trained with a moving camera and a static scene, then its prediction would
only be able to predict scene transformations caused by that moving camera.
Maybe this explains why the tracking somewhat fails when the ball is moving
along the grass towards the end of the scene. It doesn't "know" much about
moving objects, just moving cameras? So training with moving objects would
let it predict those as well?

In what the video is showing, if it can predict perspective transforms from
camera movement, like it seems to be doing, does that mean it's making
something like a 3d model, or something like a depth map used for its motion
prediction, somewhere in there?

I would love to see the error video of some sort of rotating 3d wireframe
that it was trained on.

This whole approach of a general "predictor" seems incredibly powerful.

------
therobot24
Is this just a write-up of the Udacity class project?

~~~
greenpizza13
I was wondering the same. He doesn't present it that way.

------
amelius
Is it actually allowed to use ML in an autonomous vehicle?

~~~
ma2rten
It would be completely impossible to build an autonomous vehicle without ML.
The question is how you use it in a way that is reliable and fail safe.

------
hash-set
Loved the article, but holy hell, medium.com is a bastion of dishonest
journalism! I'd move the TensorFlow article somewhere else!

