Academic incentives are awkward. Computer scientists and other quants find it difficult to build careers in this stuff, because it's often either too derivative to be cutting-edge ML ('you applied last year's tools to an education problem we don't care about, yay?') or too reductive to be useful in the target discipline ('you've got an automated system to tell us how newspaper articles are framed? Lovely, but we've had grad students reading them for 30 years and already have a much more nuanced view than your experimental system can give us').

That, and it's difficult to sell results to people who don't understand your methods, which is, by definition, practically everybody when you're making the first applications of a method in a field.

I'm a social scientist who can code, and even stuff like SVMs can be a hard sell outside the community of methodologists.

You're also spot on about annotated training data. The framing example above is my current problem, and there is one (1) existing annotated dataset, labeled with a version of frames that is useless for my substantive interest. ImageNet this is not.
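For concreteness, the kind of pipeline that's a hard sell is roughly this (a minimal sketch, not my actual setup; the articles, frame labels, and scikit-learn choices below are made up for illustration):

    # TF-IDF features plus a linear SVM, trained on hand-annotated frames.
    # Texts and labels are invented placeholders, not a real dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = [
        "Senate debates the cost of the new infrastructure bill",
        "Families describe how the shutdown hit their daily lives",
        "Economists warn the deficit will grow under the proposal",
        "Residents share stories of hardship after the plant closed",
    ]
    frames = ["economic", "human-interest", "economic", "human-interest"]

    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(texts, frames)
    print(clf.predict(["New report tallies the bill's price tag"]))

With one small annotated dataset, most of the real work is in the annotation scheme rather than the model; the SVM is the easy part to build and still the hard part to sell.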



