Hiya!
I have a background in cognitive science, the study of the mind. Artificial neural networks (ANNs) and deep neural networks (DNNs) are probably the hottest topics within ML that are supposed to model the mind.
One such task is image classification. Since I am not well read on the matter, I am simply curious: in what respects do today's models fall short at this task?
Question:
Can today's state-of-the-art neural network models separate a red apple from a green one after having trained on, let's say, 1000 images of red apples and 0-1 green ones?
I know that one of the downsides of current models is that they demand a lot of data, which our minds do not. If someone showed me 1000 different images of red apples, that would be enough for me to infer that green ones might exist, partly because I already have a perception of what green is and can apply it. And even if I didn't know the color green, it would be sufficient to show me a single image of green, or of a green apple, for me to learn that fact.
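To make the imbalance scenario concrete, here is a toy sketch (my own illustration, not anything from a real apple dataset): if the model already works with a feature that separates the classes, such as a hand-picked mean RGB colour, then even a single green example can be enough for a trivial nearest-centroid classifier. The hard part for deep networks is that they must learn such features from raw pixels, where 1000-vs-1 imbalance hurts. All names and the synthetic data below are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "apples" as mean RGB colour vectors (hypothetical features).
red_train   = rng.normal([0.8, 0.1, 0.1], 0.05, size=(1000, 3))  # 1000 red apples
green_train = rng.normal([0.1, 0.8, 0.1], 0.05, size=(1, 3))     # a single green apple

# Nearest-centroid classifier: one example per class can suffice
# when the feature (colour) already separates the classes.
centroids = np.stack([red_train.mean(axis=0), green_train.mean(axis=0)])
labels = ["red", "green"]

def classify(x):
    # Assign the label of the nearest class centroid.
    return labels[int(np.argmin(np.linalg.norm(centroids - x, axis=1)))]

# Unseen green apples are classified correctly despite the 1000:1 imbalance.
green_test = rng.normal([0.1, 0.8, 0.1], 0.05, size=(100, 3))
accuracy = np.mean([classify(x) == "green" for x in green_test])
print(accuracy)  # close to 1.0
```

This is essentially the point about human perception: given a good prior representation (here, colour), one example generalizes; without it, a deep net trained end-to-end on raw pixels typically needs rebalancing, augmentation, or few-shot techniques to handle the rare class.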