

Tom Mitchell: Never Ending Language Learning (2012) - bra-ket
https://www.youtube.com/watch?v=51q2IajH94A

======
bra-ket
I saw this talk in person last week, and there are some new developments,
in particular the upcoming paper by E. A. Platanios and Mitchell (2014), where
they describe a new way to reconcile consistency with correctness, i.e. how to
resolve conflicts between internally consistent but factually incorrect
beliefs, and how to train the machine to learn from its mistakes (currently
they just delete incorrect beliefs, unlike human learners, who incorporate
their history of "fails" into making new decisions). Among other things, they
are working on introducing a time scale into learning and on integrating NEIL
(the Never Ending Image Learner).

------
jeffreyrogers
Here is the project page:
[http://rtw.ml.cmu.edu/rtw/](http://rtw.ml.cmu.edu/rtw/)

If you scroll to the bottom you can see the "facts" this program has most
recently learned, along with its confidence in them. To me these facts don't
look very impressive, since many are wrong or at best a very stretched
approximation of the truth. (Edit: I refreshed the list and the content got a
lot better, so I may have just hit a particularly bad batch of data.)

