
How We Derive Context from Big Data - jisaacso
http://blog.urx.com/urx-blog/2015/11/6/how-urx-derives-context-from-big-data
======
timClicks
Well done for the engineering feat here. Parsing Wikipedia at scale isn't
easy.

Very surprised that they don't mention DBpedia, though
([http://wiki.dbpedia.org/](http://wiki.dbpedia.org/)), or discuss how they
navigate some of the big NLP challenges with Wikipedia, namely that many
edits are made by bots.
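One common mitigation for the bot-edit problem is to filter revisions using the bot flag that the MediaWiki API exposes on edits, with the "…Bot" account-naming convention as a fallback heuristic. A minimal sketch, with hand-written sample data standing in for a real API response:

```python
# Sketch: filtering bot edits out of a Wikipedia revision stream.
# The MediaWiki API can flag bot edits, and many bot accounts follow a
# "...Bot" naming convention. The sample data below is illustrative,
# not real API output.

def is_bot_edit(revision):
    """Heuristically decide whether a revision was made by a bot."""
    if revision.get("bot"):  # explicit flag, when the API provides it
        return True
    user = revision.get("user", "")
    return user.lower().endswith("bot")  # naming-convention fallback

revisions = [
    {"user": "ClueBot NG", "bot": True,  "comment": "Reverting vandalism"},
    {"user": "jane_doe",   "bot": False, "comment": "Fix typo"},
    {"user": "RussBot",    "bot": False, "comment": "Update links"},
]

human_edits = [r for r in revisions if not is_bot_edit(r)]
print([r["user"] for r in human_edits])  # ['jane_doe']
```

A heuristic like this trades recall for simplicity; a production pipeline would rely on the API's bot flag and curated bot lists rather than name matching alone.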

~~~
jisaacso
DBpedia is a great, structured representation of Wikipedia. Agreed, there are
many challenges, NLP being one. I think you'll like our other blog posts
[http://blog.urx.com/urx-blog/2015/7/28/named-entity-recognition-examining-the-stanford-ner-tagger?rq=ner](http://blog.urx.com/urx-blog/2015/7/28/named-entity-recognition-examining-the-stanford-ner-tagger?rq=ner)

------
jph
Calculating context on mobile is a huge win for content providers, who can use
the context to create deep links into other apps.

Example: I'm reading an article about James Bond, and ideally the phone can
calculate that there's a new Bond movie, and I'm near a theater, and I'm free
on Thursday night, and then create a link to buy tickets.
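That scenario amounts to joining several context signals into a single deep link. A toy sketch of the flow, where every name (the signals, the `tickets://` URL scheme) is hypothetical and only illustrates the article-to-deep-link idea, not any real URX API:

```python
# Hypothetical sketch: combining context signals (entity from the article,
# a nearby venue, a free calendar slot) into a deep link. The signal names
# and the "tickets://" scheme are made up for illustration.

from urllib.parse import urlencode

def build_ticket_deeplink(entity, nearby_theater, free_slot):
    """Join context signals into a deep link for a ticketing app."""
    if not (entity and nearby_theater and free_slot):
        return None  # not enough context for a useful suggestion
    params = {
        "movie": entity,
        "theater": nearby_theater,
        "showtime": free_slot,
    }
    return "tickets://buy?" + urlencode(params)

link = build_ticket_deeplink("Spectre", "Downtown Cinema 7",
                             "2015-11-12T19:30")
print(link)
# tickets://buy?movie=Spectre&theater=Downtown+Cinema+7&showtime=2015-11-12T19%3A30
```

Returning `None` when any signal is missing reflects the point of the example: the link is only worth surfacing when all the contextual pieces line up.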

~~~
jisaacso
Great example! This is the exact problem URX is solving: empowering content
providers to present their products within meaningful contexts.

