Hacker News
How We Derive Context from Big Data (urx.com)
25 points by jisaacso on Nov 10, 2015 | 4 comments



Well done on the engineering feat here. Parsing Wikipedia at scale isn't easy.

Very surprised that they don't mention DBpedia, though (http://wiki.dbpedia.org/), or discuss how they navigate some of the big NLP challenges with Wikipedia, namely that many edits are made by bots.


DBpedia is a great, structured representation of Wikipedia. Agreed, there are many challenges, NLP being one of them. I think you'll like our other blog posts, e.g. http://blog.urx.com/urx-blog/2015/7/28/named-entity-recognit...
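
For anyone curious what "structured representation" means in practice, here's a minimal sketch of pulling one fact out of DBpedia's public SPARQL endpoint with the SPARQLWrapper Python library (the entity and predicate are just illustrative, not anything URX uses):

    # Query DBpedia's public SPARQL endpoint for the English abstract
    # of the "James Bond" resource (pip install sparqlwrapper).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?abstract WHERE {
          <http://dbpedia.org/resource/James_Bond>
              <http://dbpedia.org/ontology/abstract> ?abstract .
          FILTER (lang(?abstract) = "en")
        }
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["abstract"]["value"])

The nice part is that you get curated entities and relations without parsing raw wikitext yourself, at the cost of DBpedia's extraction lag and coverage gaps.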


Calculating context on mobile is a huge win for content providers, who can use the context to create deep links into other apps.

Example: I'm reading an article about James Bond; ideally the phone can work out that there's a new Bond movie, that I'm near a theater, and that I'm free on Thursday night, and then create a link to buy tickets.
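
A rough sketch of what that context-to-deep-link step could look like, purely illustrative (the signal names and the "tickets://" URL scheme below are made up, not any real app's API):

    # Hypothetical: combine on-device context signals into a ticket-purchase
    # deep link. Fields and the "tickets://" scheme are placeholders.
    from urllib.parse import urlencode

    def build_ticket_link(context):
        # Only surface a link when the relevant signals all line up.
        if not (context.get("topic") == "James Bond"
                and context.get("nearby_theater")
                and context.get("free_slot")):
            return None
        return "tickets://buy?" + urlencode({
            "movie": "Spectre",
            "theater": context["nearby_theater"],
            "showtime": context["free_slot"],
        })

    print(build_ticket_link({
        "topic": "James Bond",
        "nearby_theater": "AMC Metreon 16",
        "free_slot": "Thursday 19:30",
    }))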


Great example! This is the exact problem URX is solving: empowering content providers to present their products within meaningful contexts.



