
You can't just look at the segmentation task itself; you also have to consider the availability of good labeling and sufficient training data. Given enough data of sufficient quality to capture the variability of your domain, with good labeling on it, you can probably make a deep model perform quite well. Without those inputs, you will often do better with other techniques (there are many).

Furthermore, if you have a decent physical model and/or some constraints (e.g. much of industrial QA, cell counting, and similar tasks with a fixed FOV), you can do quite well with classic approaches, and those can be quite robust. Some of the deep models you see performing well on e.g. small curated sets for conferences just don't generalize well at all, which isn't surprising given the setup.
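To make the "classic approach" point concrete, here is a minimal cell-counting sketch (not anyone's specific pipeline), assuming a fixed FOV, roughly uniform illumination, and bright cells on a dark background. It uses scikit-image; the filename and the min_area parameter are illustrative placeholders you would tune to your own setup.

  # Minimal classic cell-counting sketch (illustrative only; parameters are
  # placeholders to be tuned for a specific, fixed-FOV imaging setup).
  import numpy as np
  from skimage import io, filters, morphology, measure

  def count_cells(image_path, min_area=50):
      # Load and convert to grayscale.
      img = io.imread(image_path, as_gray=True)

      # A global Otsu threshold is often enough when illumination is uniform.
      thresh = filters.threshold_otsu(img)
      binary = img > thresh  # assumes bright cells on a dark background

      # Clean up small specks and fill small holes inside cells.
      binary = morphology.remove_small_objects(binary, min_size=min_area)
      binary = morphology.remove_small_holes(binary, area_threshold=min_area)

      # Label connected components and count the ones above the size cutoff.
      labels = measure.label(binary)
      regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
      return len(regions)

  if __name__ == "__main__":
      print(count_cells("plate_01.png"))  # hypothetical input image

Nothing here is learned from data, so its behavior is easy to reason about and verify against the imaging constraints; the trade-off is that it only holds up while those constraints do.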
