We've had several "customer feedback / intent / support case analysis" projects in the past. Some for large customers with millions of individual records (Autodesk), where there's the additional challenge of "What should the categories be in the first place? What's in the data?" (discovery).
What we learned is that a model trained on one type of feedback will not necessarily perform well on others, because the relevant signals manifest differently across modalities: feedback length / writing style / typos, lexical richness / repetition / boilerplate, OCR noise / how long the long tail is… Your model may learn to pick up on cues that are orthogonal to the sentiment or categorization problem.
This is especially true for black-box models (deep learning) where introspection is limited: Did the model learn to rely on syntax? Specific words or character n-grams? Exclamation marks? Something else? Does an Indian-looking name imply sentiment negativity?
Slapping a generic ML technique (Stanford NLP, Naive Bayes, bi-LSTM, whatever) onto a bunch of tokens is a reasonable first step; that's the low-hanging fruit. The tricky part is defining the problem space and the QA process correctly, and managing the devil that comes with the details.
That being said, there is a promising new research paper from fast.ai (https://arxiv.org/abs/1801.06146) that speaks to using Wikipedia data to create a language model for a specific language, which can then be trained specifically on the task you are trying to solve. If this is as effective as the authors state, then NLP could see huge improvements for non-English languages where there is already a large set of Wikipedia data.
Something as simple as wanting to split a sentence into words can be difficult. For example, you may want to split German compound nouns into their component words, and to do that you need a model (or list) of nouns so that you can identify them. Or you're working with Chinese, which doesn't have spaces at all.
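To make the compound-noun point concrete, here's a minimal sketch of greedy longest-match splitting against a known-noun list. The noun set is a made-up toy example, and real German splitting also has to handle linking elements (Fugen-s etc.), which this ignores:

```python
# Toy list of known nouns; a real system would use a large lexicon.
KNOWN_NOUNS = {"dampf", "schiff", "fahrt", "haus", "tuer"}

def split_compound(word, nouns=KNOWN_NOUNS):
    """Greedily split a compound into known component nouns (longest match first)."""
    word = word.lower()
    parts, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest candidate first
            if word[i:j] in nouns:
                parts.append(word[i:j])
                i = j
                break
        else:
            return [word]  # no full segmentation found: keep the token whole
    return parts

print(split_compound("Dampfschifffahrt"))  # -> ['dampf', 'schiff', 'fahrt']
```

Chinese word segmentation is the same problem in a harsher form: every sentence is one long "compound", and greedy dictionary matching is only the crudest baseline.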
A bunch of the deep learning work starts from characters for this reason: you get to avoid that messy step. Though in Chinese, characters may not quite be the best representation either; maybe you want to break characters down into their component radicals. (I don't actually know this, I don't work on Chinese NLP, and have not run this theory past any Chinese speakers.)
But if you're not just throwing everything in a Char-LSTM, you may want to do things like lemmatization so that you can generalize across different forms of a word, or maybe you want to use lemmatization info to inform your tokenization, so that you don't lose the form info.
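One way to "use lemmatization info to inform your tokenization without losing the form info" is simply to carry both per token. A tiny sketch, where the lemma table is a hypothetical stand-in for a real lemmatizer:

```python
# Hypothetical lemma table standing in for a real lemmatizer.
LEMMAS = {"ran": "run", "running": "run", "mice": "mouse", "better": "good"}

def annotate(tokens):
    """Pair each surface form with its lemma so features can use either."""
    return [(tok, LEMMAS.get(tok.lower(), tok.lower())) for tok in tokens]

print(annotate(["The", "mice", "ran"]))
# -> [('The', 'the'), ('mice', 'mouse'), ('ran', 'run')]
```

Downstream features can then generalize over lemmas ("run" covers "ran"/"running") while the surface form stays available for anything that needs tense or number.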
But, really, one big advantage of neural nets is that you don't need to do this: you can just get a big pile of labels via MTurk/users and train on that, without needing to understand the language you're working on very deeply.
No you don't, if what you want to process is text. You're right, however, that a big problem is the segmentation that must happen before any processing, and that cannot be done 100% correctly by software. Thus, errors compound down the chain.
Did you read more than the title of the paper you linked? Because the Stanford paper states:
"Results and Discussion
We consistently observed a decrease in performance (i.e. increased perplexity) with radicals as compared to baseline, in contrast to a significant increase in performance with part-of-speech tags. [...] Such a robust trend indicates that radicals are likely not actually very useful features in language modeling"
For most tasks, you won't get more information on a word by looking at its character decomposition, in the same way that the individual letters of a lemma won't help you with the task.
There are existing use cases, however. It is useful when building dictionaries for human beings (for search, for example; I put such a tool online just yesterday) and when trying to automatically guess the reading of a character.
I haven't really dug into these papers, though the Stanford paper does say "This conclusion is consistent with results from part-of-speech tagging experiments, where we found that radicals of previous word are not a helpful feature, although the radical of the current word is.", whereas the quote you pulled out has to do with language modeling.
Though I wouldn't consider a single negative result from before the deep learning trend took off necessarily indicative of the value.
The more recent paper, on the other hand, sees a positive boost from their "hierarchical radical embeddings" vs traditional word or character embeddings for 4 classification tasks. Not that this is necessarily meaningful either.
In my mind, the usefulness of this would be not that you get new information, per se, but that you could generalize some amount of knowledge to rare/out-of-vocabulary words.
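That generalization idea is essentially what fastText does with character n-grams: an unseen word still gets a vector as the average of its n-gram vectors. A toy sketch with made-up 2-d vectors (real subword embeddings are trained, not hand-set):

```python
# Toy trained n-gram vectors; '<' and '>' mark word boundaries, fastText-style.
NGRAM_VECS = {
    "<un": [1.0, 0.0], "unh": [0.5, 0.5], "app": [0.0, 1.0],
    "ppy": [0.2, 0.8], "py>": [0.1, 0.9],
}

def char_ngrams(word, n=3):
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def word_vector(word):
    """Average the known n-gram vectors; None if nothing at all was seen."""
    vecs = [NGRAM_VECS[g] for g in char_ngrams(word) if g in NGRAM_VECS]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
```

Even if "unhappy" never appeared in training, its n-grams ("<un", "unh", "ppy", ...) may have, so it still lands near related words. Radicals would play the same role for rare Chinese characters.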
Since you work in the field though, do you have any pointers to good papers on Chinese NLP?
- https://aclanthology.info/pdf/I/I05/I05-7002.pdf This paper makes use of the radicals to build an ontology, and it does so with a stunning amount of depth (historical context, variants, etc.) that most works overlook. Too bad no data is available.
- http://www.persee.fr/doc/clao_0153-3320_1978_num_4_1_1047 Very interesting read on the formation of Chinese-like characters by the Vietnamese. Some techniques described were also used by the Japanese when adopting sinograms.
- didn't read the paper, but the references section lists a number of papers about the segmentation of Mandarin: http://www.anthology.aclweb.org/F/F12/F12-3001.pdf
- didn't read it yet, but it seems to contain accurate information on the Chinese writing system: http://learnlab.org/uploads/mypslc/publications/perfetti-lex...
Anyway, I think getting a fair understanding of the writing system requires learning about 600 characters in either Chinese or Japanese, plus the basics of the chosen language.
> How would the system handle something like "Your service is the sh* t!"
This is pretty easy to handle correctly with sufficient training data. A good demonstration is the deepmoji sentiment predictor: https://deepmoji.mit.edu/
Your service is sh* t!
Your service is shit!
Your service is the sh* t!
Your service is the shit!
Works pretty much perfectly.
Edit: how am I supposed to escape the * without leaving a space after it!?
Another task is understanding the semantics of expressions.
Some basic ones are
Man > Woman
Dog > Cat
Monday > Tuesday
Disclaimer: I currently work for an NLP company.
But you can query our knowledge base of semantic words here, with your stated languages and more.
For a couple of our Norwegian and Spanish customers, we hit Google Translate to translate feedback into English and then feed it through our ML engine to classify. Accuracy obviously is not as good as it should be, but it gives them good insight.
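The pipeline shape is simple enough to sketch. Everything here is a hypothetical stand-in: `translate` would wrap a real translation API and `classify_english` the English-trained model; the stub dictionary and keyword check exist only so the sketch runs:

```python
def translate(text, source_lang, target_lang="en"):
    # Hypothetical stand-in for a translation API call.
    return {"Tjenesten er flott": "The service is great"}.get(text, text)

def classify_english(text):
    # Hypothetical stand-in for the English-only sentiment classifier.
    return "positive" if "great" in text.lower() else "neutral"

def classify_feedback(text, lang):
    """Translate non-English feedback first, then classify in English."""
    if lang != "en":
        text = translate(text, source_lang=lang)
    return classify_english(text)

print(classify_feedback("Tjenesten er flott", "no"))  # -> 'positive'
```

The accuracy loss the parent mentions comes from the translation step: sentiment cues (idioms, sarcasm, intensity) don't always survive machine translation intact.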
I am not aware of a corpus for political sentiment specifically. There's a general twitter sentiment dataset, the link appears to be broken but it's what everyone cites, not sure why it's down.
This paper uses tweets and emoticons in the tweet as a soft label for sentiment. There are obvious issues with that, but it's a cheap way to get lots of noisy labels.
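The distant-supervision trick is easy to sketch: treat emoticons as noisy labels, then strip them from the text so the model can't just memorize them. The emoticon sets below are a small illustrative sample, not the paper's actual lists:

```python
POS = {":)", ":-)", ":D", "<3"}
NEG = {":(", ":-(", ":'("}

def weak_label(tweet):
    """Return (cleaned_text, noisy_label); label is None if unusable."""
    tokens = tweet.split()
    label = None
    if any(t in POS for t in tokens):
        label = "pos"
    if any(t in NEG for t in tokens):
        # Conflicting emoticons in one tweet: too noisy, discard it.
        label = None if label else "neg"
    text = " ".join(t for t in tokens if t not in POS | NEG)
    return text, label

print(weak_label("great service :)"))  # -> ('great service', 'pos')
```

The "obvious issues" show up immediately: sarcasm (":)" on a complaint), emoticons attached to words without spaces, and mixed-sentiment tweets all produce wrong or discarded labels, which is why these labels are cheap but noisy.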
I was. I was guessing that you had a general topic/clause segmentation model plus sentiment analysis. It sounds like you're saying the topic/clause (or issue) segmentation model is pretty domain specific, so what new datasets you'd need to build for political issues is beyond me, but I think it'd be well worth it. Connecting politicians and constituents is a pretty universal need.
Despite this, we still run into some feedback that is complete gibberish, or does not refer to anything. Fortunately, since this is a multi-label classification problem, it is possible for us to classify the feedback as not having any tag associated with it. Therefore, including some of these samples in our training data helps fortify our engine against any meaningless live data that may come in, which simply gets no tag.
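That "no tag" outcome falls out naturally from multi-label scoring: each tag gets an independent score, and an item whose scores all miss the threshold ends up with an empty tag set. A minimal sketch, where the keyword scores are made-up stand-ins for a trained per-tag model:

```python
# Toy per-tag keyword sets standing in for trained per-tag classifiers.
TAG_KEYWORDS = {
    "pricing": {"price", "expensive", "cost"},
    "support": {"support", "agent", "ticket"},
}

def predict_tags(text, threshold=0.5):
    """Score each tag independently; an empty result set means 'no tag'."""
    words = set(text.lower().split())
    tags = set()
    for tag, keywords in TAG_KEYWORDS.items():
        score = len(words & keywords) / len(keywords)  # crude per-tag score
        if score >= threshold:
            tags.add(tag)
    return tags

print(predict_tags("asdf qwerty"))  # -> set()
```

Including gibberish samples in training then teaches each per-tag model to score such inputs low, rather than forcing every item into some category the way single-label softmax would.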
In our upcoming blog about our "human in the loop" machine learning system, we also address how we can manually filter samples of data to make our training more efficient.
As far as the domain affecting the algorithm: it can vary from some algorithms maintaining decent performance over most industries to some working very well for certain industries and terribly for others. Although it is all just feedback, the topic of the feedback, and even the way people talk about the same topic (such as the price of the product), will vary across each industry.