
Does this mean huge datasets are no longer a prerequisite for this type of computing, leveling the playing field for smaller teams who may no longer have to rely on Google- or FB-sized datasets?



I don't think so. They are basically losing entropy by going from the corrupted image to the original one, which is more predictable.

I don't think you can do other things, like labeling, with the same method.
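To make the distinction concrete, here's a minimal sketch (my own illustration, not the paper's code) of why this kind of restoration task supervises itself: the training target is just the image, recovered after a synthetic corruption, so no human annotation is involved. Everything here is an assumption for illustration: PyTorch, the tiny conv net, Gaussian noise as the corruption, and random tensors standing in for a real image set.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for an unlabeled image collection (no class labels anywhere).
    images_only = TensorDataset(torch.rand(64, 3, 32, 32))
    loader = DataLoader(images_only, batch_size=16)

    # A deliberately tiny restoration network; real models are much larger.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for (images,) in loader:
        noisy = images + 0.1 * torch.randn_like(images)  # corruption is generated, not collected
        loss = loss_fn(model(noisy), images)             # target = the original image, no label
        opt.zero_grad()
        loss.backward()
        opt.step()

There is no equivalent free target for a labeling or classification task, which is the point above: the supervision signal here comes from the corruption process itself, not from annotated data.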



