http://norvig.com/mwlb.html - because it stays focused on language, it manages to make a clear and convincing case
http://www.nytimes.com/books/first/l/lakoff-philosophy.html - by comparison a flawed but nonetheless good read. I agree with the critiques laid out in this review: http://lesswrong.com/lw/871/review_of_lakoff_johnson_philoso...
Thanks for the tip, I'll check out "Louder Than Words"
A less poetic example of privileged information: if you're training on time-series information, you can include events from the future in the training examples, even though they won't be available while making predictions in production.
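A minimal sketch of that leak (toy data, numpy only; the names and setup are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
prices = rng.normal(size=100).cumsum()  # toy time series

# Task: at time t, predict whether the series rises at t+1.
target = (np.diff(prices) > 0).astype(int)  # target[t] == 1 iff prices[t+1] > prices[t]

# Legitimate feature: only uses values up to time t.
valid_feature = prices[:-1]                 # prices[t]

# Leaky "feature": the future value itself, present in the training
# dump but never available at prediction time in production.
leaky_feature = prices[1:]                  # prices[t+1]

# The leak makes the problem trivially "solvable" offline...
leaky_pred = (leaky_feature > valid_feature).astype(int)
print((leaky_pred == target).mean())        # 1.0 -- perfect, and meaningless
```

The offline accuracy of 100% evaporates in production, where `prices[t+1]` doesn't exist yet; that's what distinguishes this leak from privileged information used correctly, which is restricted to training time on purpose.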
Apparently this helps the machine learning algorithm find the outlying data points when the data isn't linearly separable.
In particular, I was unsure after reading the original article whether the additional information--for example, the poetry--was available to the learner on test inputs. The above slides explicitly state that it is not.
What's an example of pseudocode that would actually implement this? Surely you don't load a natural language module in order to parse the pathologist's notes (in the example given in the reference about biopsies)?
(I should also note that the original article is devoid of any technical examples, making it completely opaque to me what it actually entails.)
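One published recipe for this kind of setup is "generalized distillation" (Lopez-Paz et al.): fit a teacher model on the privileged features (the pathologist's notes, encoded somehow), then fit the student on the ordinary features to imitate the teacher's soft predictions, so the notes never need parsing at test time. A toy numpy sketch under those assumptions, not the article's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=(n, 2))                  # ordinary features (e.g. the image)
# Hypothetical privileged feature (e.g. a score extracted from the notes):
x_priv = x @ np.array([1.0, -1.0]) + rng.normal(scale=0.1, size=n)
y = (x_priv > 0).astype(float)               # labels driven by the privileged signal

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(features, labels, steps=500, lr=0.5):
    """Plain gradient descent on logistic loss; labels may be soft."""
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        p = sigmoid(features @ w)
        w -= lr * features.T @ (p - labels) / len(labels)
    return w

X = np.c_[x, np.ones(n)]                     # ordinary features + bias column
Xp = np.c_[x_priv[:, None], np.ones(n)]      # privileged feature + bias column

w_teacher = fit_logreg(Xp, y)                # teacher sees the privileged info
soft_labels = sigmoid(Xp @ w_teacher)        # its confidence encodes that info

w_student = fit_logreg(X, soft_labels)       # student sees only x
acc = ((sigmoid(X @ w_student) > 0.5) == y).mean()
```

The student carries no natural-language machinery at all; the notes only shape the soft labels it was trained to match.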
Here's a directory containing the data for the number learning example:
It really is a very innovative approach IMO.
These different distributions are difficult to distinguish with statistics. However, if you can see the shape and therefore know the "rule", or the structure, of the distribution, it's easy to design/train a network to recognise them.
(note that the content of that PDF is not about this problem per se)
If this is how it works, it's a shame this isn't made clearer in the texts. You could get someone to tag images whimsically but consistently, rather than go to all the trouble of writing poetry.
“In many cases, humans use their own knowledge about actions to recognize those actions in others,” he told me. “If a robot knows how to grasp, it has better chance of recognizing grasping actions of a person,” he said.

Metta is going one step further, by teaching iCub to follow social cues, like directing its attention to an object when someone looks at it. These cues become another channel through which to collect privileged information about a complex world.
If we can mimic and model the neuronal pathways and firings of humans using AI, then we should also be able to study how those pathways fail and degrade. AI pathways can break and fail as well, though the causes are of course different. Still, one would imagine there are some shared structural stresses that result in "unlearning" and in a failure to fire or function.
The potential for AI is immeasurable. If we can teach a robot, surely we can stress its system enough to "unteach" it. We can't force-feed it junk food or vitamins and minerals, but we can replicate environmental stressors, and once we've induced the "unlearning" process, examine how to halt the degradation and perhaps reverse the trajectory.
And it's a good thing he didn't - violins don't have frets /nitpick
I'll get my coat...