Hacker News
jpster on Dec 1, 2017 | on: Deep image prior 'learns' on just one image
Does this mean huge datasets are no longer a prerequisite for this type of computing, leveling the playing field for smaller teams who may no longer have to rely on Google- or Facebook-sized datasets?
cdancette on Dec 1, 2017
I don't think so. They are basically losing entropy by going from the corrupted image to the original one, which is more predictable.
I don't think you can do other things, like labeling, using the same method.
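To illustrate the intuition (this is our own toy sketch, not the paper's method): fitting an over-parameterized model to a single corrupted signal with gradient descent, and stopping early, recovers the smoother, more predictable structure before the noise gets memorized. Here a linear sine/cosine model stands in for the CNN, and the 1/f scaling of the basis functions is an assumed stand-in for the network's bias toward fitting low frequencies first; all names and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean 1-D "image": a smooth signal. We only ever observe a noisy copy,
# which plays the role of the single corrupted input image.
n = 256
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

# Over-parameterized linear model: sines and cosines up to frequency 64,
# each scaled by 1/f. The scaling is our stand-in for the implicit bias
# of a conv net: under gradient descent, low-frequency components are
# fitted much faster than high-frequency (noise-like) ones.
freqs = np.arange(1, 65)
cols = [np.sin(2 * np.pi * f * t) / f for f in freqs]
cols += [np.cos(2 * np.pi * f * t) / f for f in freqs]
basis = np.stack(cols, axis=1)  # shape (n, 128)

# Gradient descent on ||basis @ w - noisy||^2 starting from w = 0,
# stopped early. Early stopping is what keeps the fit from eventually
# reproducing the noise exactly.
w = np.zeros(basis.shape[1])
lr = 1e-3
for _ in range(200):
    w -= lr * (basis.T @ (basis @ w - noisy))

denoised = basis @ w
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_noisy, mse_denoised)  # the early-stopped fit is closer to the clean signal
```

Run to convergence instead of 200 steps and the fit reproduces the noisy input; the denoising comes entirely from the interaction of the model's bias with early stopping, which matches the entropy argument above: the mapping only works because the original is more predictable than the corruption.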