EDIT: forgot I had put up all my stuff into a Github repo a few years ago: https://github.com/capnmidnight/optical-illusions-in-cg/blob...
I recently had the idea to start a 1994-style, Yahoo-style web directory that only links to sites which fulfill these conditions:
a) they have a lot of quality content
b) they have minimal or no ads (absolutely no signups, freemium models, 14-day trials, etc.).
c) no obnoxious self-advertising of a company's own brand (e.g. "Will It Blend? | Presented by Blendtec").
d) (this one is hard to quantify, but it's an "I know it when I see it" thing) they were made with love and a desire to spread thoughts and creations, rather than a desire to make money.
Obviously this directory site would not be a vehicle for making money. I also don't want this to be a hipster coolness thing. It's not the retro-ugly layouts I'm after, it's the content.
Stupid? Does it already exist?
The site that I came across that triggered this thought was http://www.americanradiohistory.com/ .
Even this example is full of <FONT> tags and align="center" attributes.
Once it did, I found that many of the other illusions also didn't work. Also, I have a headache, and everything not on my screen seems slightly unfocused.
Clearly I tried a bit too hard!
(Honestly, though, that website is amazing. The "idiot" was purely for effect.)
thanks for reaffirming that it is possible and sparing me a headache.
Is anyone working on computer vision that is "tricked" by illusions? I'm curious because it seems like a good way to test how accurately computer vision maps to human vision (then again, I'm not even sure that's a goal!)
I worked on detecting the player's skin color for a recent computer game. We decided not to use an RGB camera for pretty much the same reason the strawberry illusion works - the computer would see those strawberries as gray. An image shot under unknown lighting might look fine to a human, but it's hard for a computer to estimate the actual color of objects without also seeing objects of known color, such as a color card.
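The color-card idea can be sketched in a few lines: sample the card as it appears in the image, compare that to the card's known color, and derive per-channel gains. This is a minimal illustration under made-up numbers (the function name, the reddish-light scene, and the toy values are all invented for the example), not the pipeline we actually shipped:

```python
import numpy as np

def correct_with_color_card(image, card_pixels, card_true_color):
    """Estimate per-channel gains from a reference patch of known color
    and apply them to the whole image.

    image           -- HxWx3 float array, values in [0, 1]
    card_pixels     -- Nx3 float array sampled from the color card in the image
    card_true_color -- length-3 array, the card's known reflectance color
    """
    observed = card_pixels.mean(axis=0)             # average observed card color
    gains = np.asarray(card_true_color) / observed  # per-channel correction
    return np.clip(image * gains, 0.0, 1.0)

# Toy example: a scene lit by a reddish light. A patch known to be
# neutral gray (0.5, 0.5, 0.5) is observed as (0.6, 0.45, 0.4).
image = np.full((4, 4, 3), (0.6, 0.45, 0.4))
card = image[:2, :2].reshape(-1, 3)
corrected = correct_with_color_card(image, card, (0.5, 0.5, 0.5))
# After correction the gray patch reads as neutral again.
```

Without that known reference, the same arithmetic has nothing to anchor to - which is exactly why the gray strawberries fool both the camera and (briefly) us.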
Illusions arise when our visual priors are deceived. Our brain expects something and makes us see it a certain way, but that's not what is happening in reality (often because the image was engineered to exploit those expectations, like the ones linked in this thread). The mental model we have of a standard "scene" doesn't cover these examples well; our brain was never trained to handle them, I guess because there was no advantage in doing so (in evolutionary terms, or while learning as a kid). Our brain is only trained to extract information efficiently from "plausible images" (lit by sun-like light, taken on Earth, etc.); feed it random noise and it will try to explain it with things it knows (which is called pareidolia).
In machine learning vision, we re-learn, usually from scratch (or by fine-tuning), for each experiment. This generates (or modifies) the learned priors. Think of the priors as the "default" image (in terms of a complex internal representation, not of pixels) that helps the model think about the problem at hand. If you have a motion detection/tracking problem, the optimal default representation will differ from the one most useful for classification or segmentation.
What I want to say with these examples is that machine learning computer vision is prone to its own illusions, that is, images that defeat (fall too far from, or are poorly explained by) its internal representation space and/or default representation. Also, each algorithm (be it a neural network, an SVM, or anything, really) has a different internal representation, so different images will be illusions for it. An illusion for one model won't necessarily be an illusion for another.
The thing is, we are far from mastering advanced machine learning, in the sense that we don't have optimality proofs for capacity, architecture, and filters of deep neural networks for a given task, for example. There's a lot of recent research on these illusions--for example, adversarial examples and adversarial networks. It seems to indicate that these illusions are quite unlike human vision illusions and stem from the mathematical nature of machine learning: for example, adding small noise (sometimes with a magnitude lower than the smallest value representable in standard image formats!) to a correctly classified image can produce a wrong yet very confident prediction. The most prominent viral example of this on the internet was a school bus being classified, with high confidence, as an ostrich after a little noise was added to the image. Other examples can be found in the introduction of .
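The mechanism behind that kind of attack can be seen even on a toy linear classifier. The sketch below illustrates the fast-gradient-sign idea with made-up numbers (the random weights, the faint 0.01 pattern, and the eps = 0.03 step are all invented for the example); it is not any published attack code:

```python
import numpy as np

d = 3072                          # a 32x32 RGB image, flattened
rng = np.random.default_rng(0)

# Toy linear classifier on centered pixels: class 1 if w . (x - 0.5) > 0.
w = rng.normal(size=d)

# A "clean" image: mid-gray plus a faint pattern aligned with w,
# so the model assigns it class 1 with a clear margin.
x = 0.5 + 0.01 * np.sign(w)
margin = w @ (x - 0.5)            # = 0.01 * sum(|w|), comfortably positive

# FGSM-style attack: nudge every pixel by eps against the gradient.
# Each pixel moves by only 3% of its range, but the tiny pushes all
# line up with the gradient and accumulate over all d dimensions.
eps = 0.03
x_adv = x - eps * np.sign(w)
adv_margin = w @ (x_adv - 0.5)    # = -0.02 * sum(|w|), strongly negative
```

No single pixel changes enough for a human to notice, yet the score flips sign - the "illusion" lives entirely in the geometry of the model's decision boundary, not in anything a human visual system would react to.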
What causes the appearance of movement?