
I would not be so sure that mapping discrete linguistic objects into a continuous space is necessary. Why can't we work with the original, discrete space directly?

There are just a lot of things that still have to be figured out.

+ Different time scales. There is semantics at the sentence level, but also at the plot level. It's convenient to know key elements from the start of a story if you want to understand the plot, which means carrying information across very long spans. LSTMs are a perfect starting point (a toy sketch follows this list).

+ When to stop learning. The so-called stability-plasticity dilemma. Our ability to pay attention to what matters might be tightly linked to our capability to forget the vast bodies of text we have just read. Current NNs do not seem to forget in the right way. This was the rationale behind ART and ARTMAP (Grossberg) and might enter the AI mainstream again soon (see the second sketch at the end of this comment).

+ Grammar constructions. Some aspects of grammar seem simpler than computer vision, where we also have a lot of structure in the environment: things that can be inside other things, balanced on top of other things, temporarily occluded by other things, etc. Other aspects seem more complicated, like the pleasantness of a poem. My gut feeling is that some of this spills over from (a) structure in other modalities and (b) idiosyncrasies of our generative system (vocal cords, etc.). In other words, our grammatical preferences might be sampled not only from listening and reading.

+ Emphasis.
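
To make the first point concrete, here is a toy sketch (my own, assuming PyTorch; the vocabulary size, dimensions and the random "story" are made up): whatever the network needs to remember about the opening of a story has to survive in the LSTM's fixed-size state.

    import torch
    import torch.nn as nn

    vocab_size, embed_dim, hidden_dim = 1000, 32, 64
    embed = nn.Embedding(vocab_size, embed_dim)
    lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    # A "story" of 200 token ids; the key element sits at the very first position.
    story = torch.randint(0, vocab_size, (1, 200))

    outputs, (h_n, c_n) = lstm(embed(story))

    # h_n is the final hidden state: anything about the opening of the story
    # that still matters for the plot must be encoded in this 64-dim vector.
    print(h_n.shape)  # torch.Size([1, 1, 64])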

Just a few things that might lead to interesting NNs. Contrary to the author, I think they are definitely in line with current research.
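
And a minimal toy sketch of the ART-style vigilance idea from the second point (again my own simplification, loosely following fuzzy ART rather than Grossberg's full dynamics; the threshold and update rule are illustrative assumptions): an input either resonates with an existing prototype and refines it, or commits a new category so that old knowledge is not overwritten.

    import numpy as np

    def art_step(x, prototypes, vigilance=0.8, lr=0.5):
        # Try to "resonate" with an existing prototype (plasticity within a category).
        for i, p in enumerate(prototypes):
            match = np.minimum(x, p).sum() / max(x.sum(), 1e-9)  # fraction of x explained by p
            if match >= vigilance:
                prototypes[i] = (1 - lr) * p + lr * np.minimum(x, p)  # refine the matching prototype
                return i
        # No prototype matches well enough: commit a new category (stability of the old ones).
        prototypes.append(x.copy())
        return len(prototypes) - 1

    prototypes = []
    for x in [np.array([1., 1., 1., 0.]),
              np.array([1., 1., 0., 0.]),
              np.array([0., 0., 1., 1.])]:
        print("input", x, "-> category", art_step(x, prototypes))

The vigilance parameter is exactly the knob for "when to stop learning": high vigilance protects existing categories and spawns new ones, low vigilance lets old categories drift toward new inputs.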



