
> The point is that they're making ML practitioners acceptable at every task. You learn ML and you can have great indexes for your databases, inverse kinematics, cancer treatments, and YouTube movie generation that actually gets views, all using the same theory.

Just to make sure I'm understanding you right: the claim is that you want an ML person on your team (or you want everyone on your team to know ML the way everyone knows calculus and Shakespeare, i.e., so they can brush up on it as needed), not that human domain knowledge of ML replaces human domain knowledge of what a database does, what inverse kinematics is, how to measure cancer, or how to film movies or draw animations, right?

The latter is definitely a potential future, but I don't think we're there yet (though I might be wrong!).




We're definitely not there yet. Right now it's just one domain after another (slowly) getting replaced by ML.

It'll expand though.


But I don't think these are domains getting replaced by ML, any more than domains were replaced by calculus, or machinery, or even computers. Sure, cancer researchers need to know something about programming now, but they also still need to know ever more about biology. Is that different for ML?


It kind of is. For a specific example: a decade ago, face recognition, speech recognition, and machine translation each incorporated a lot of domain-specific knowledge and were largely disjoint fields, since doing anything remotely useful required the specialized knowledge each subfield had accumulated. Nowadays a face recognition ML expert can achieve state-of-the-art speech recognition without knowing a thing about phonetics, and a speech recognition expert can train a decent end-to-end MT solution while being ignorant of all the language/text-specific cases that required particular attention and specialized subsystems some 5 years ago.

This is quite a big practical shift in ML: we moved from research on specialized solutions for separate problems to generalized solutions that work on large classes of problems with minimal adaptation, greatly reducing the need for domain-specific knowledge. I mean, it's still usually useful, but it's not absolutely necessary the way it was before.


Don't you need to have domain expertise anyway to engineer your features? What exactly will you train your speech recognition models on, if you have no understanding at all of speech?

It definitely lowers the barrier to entry in those fields, but I'm not sure it has removed it altogether, at least not just yet.


With modern deep learning methods, you can often get state-of-the-art results without any feature engineering whatsoever - you do need a decent, efficient representation of the raw data, but that's about it.

That's the whole point: in many domains the accumulated knowledge about feature engineering isn't necessary anymore, since you can train a deep network to learn the same features implicitly from the data, and (depending on the problem) it's quite possible that the features learned in the initial layers will be better than anything people engineered before.
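
For concreteness, here's a minimal sketch of what "no feature engineering" looks like in practice (PyTorch; all shapes and layer sizes are invented for illustration): the raw data goes into the network untouched, and the early layers learn the features that used to be hand-engineered.

    # Minimal sketch: raw pixels in, learned features inside.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        # Early conv layers learn edge/texture-like filters directly from
        # raw pixels - the role hand-engineered features used to play.
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 10),            # task head, e.g. 10 classes
    )

    x = torch.randn(8, 3, 64, 64)     # a batch of raw images, no preprocessing
    logits = model(x)                 # features are learned implicitly, end to end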

For your speech example, speech recognition systems used to contain explicit phonetic components (where the chosen set of phonemes mattered), with separate acoustic models and language models. But now you can get decent results from an end-to-end system: throw all the phonetic knowledge you had into the trash bin and train a single model straight from the sound data to the output text characters. (The input isn't a raw waveform but a frequency-domain representation, e.g. https://en.wikipedia.org/wiki/Mel-frequency_cepstrum - but that's not "understanding of speech", it's understanding of the audio data format.)
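
As a rough sketch of that pipeline (PyTorch/torchaudio; the sample rate, layer sizes, and character set here are invented for illustration, and a real system would be much larger):

    # End-to-end sketch: frequency-domain audio in, characters out,
    # with no phoneme inventory and no separate acoustic/language models.
    import torch
    import torch.nn as nn
    import torchaudio

    # "Understanding of the audio data format": a mel-scale spectrogram,
    # not hand-built phonetic features.
    to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)

    waveform = torch.randn(1, 16000)          # 1 second of (fake) audio
    mel = to_mel(waveform)                    # (1, 80, time_frames)
    frames = mel.squeeze(0).transpose(0, 1)   # (time_frames, 80)

    vocab = list("abcdefghijklmnopqrstuvwxyz '")  # characters, not phonemes
    rnn = nn.LSTM(input_size=80, hidden_size=256, num_layers=2, batch_first=True)
    head = nn.Linear(256, len(vocab) + 1)     # +1 for the CTC blank symbol

    out, _ = rnn(frames.unsqueeze(0))         # (1, time_frames, 256)
    log_probs = head(out).log_softmax(-1)     # per-frame character distribution
    # Training would align frames to the transcript with nn.CTCLoss,
    # so no frame-level phonetic labels are ever needed.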



