For those familiar, the software I used (called Alpha_1) was originally written in 1980 as a testbed for the Oslo Algorithm, and is basically a NURBS toolkit. The Oslo Algorithm would later pave the way for 3D rendering and ray tracing of NURBS, as well as 3D CADD/CAM software.
This reminded me of a story about a course at MIT where the prof taught about visual perception. Maybe other HN'ers will be able to fill in the details. What was striking to me was how quickly the brain adapts to distortion in our visual field.
The prof would have students catch a ball while wearing prisms that shifted the image laterally. They would initially miss, but soon the students would adapt and find nothing wrong with what they saw. These changes persisted: when the prisms were removed, the real world seemed shifted for a few minutes and they had difficulty navigating. The same happened with inverting lenses.
But what struck me most was that near the beginning of the course he demonstrated a curious case of visual adaptation and never mentioned it again until the end of the semester. At that point he showed the class that the students' brains had still not unlearned the distortion. It was something related to seeing colors; I do not remember clearly.
It would be great if someone could fill in the details, particularly about the semester-long, persistent adaptation.
Mark Fairchild’s book Color Appearance Models has a nice couple of pages on high-level adaptation effects. Try the example on the page after this link for a demonstration of differing spatial frequency adaptation in each half of the visual field.
There are some animated gifs floating around the web that you can stare at for a few minutes and then leave yourself with really weird perception of everything you see moving around depending on its color and orientation, sometimes lasting quite a long while.
This property led to lingering aftereffects being reported by people who had used computer monitors with uniformly colored phosphors to do word processing. These monitors were popular in the 1980s and commonly showed text as green on black. People noticed later that when reading text of the same spatial frequency, in a book say, it looked pink. Likewise, a horizontal grating of the same spatial frequency as the horizontal lines of the inducing text (such as the horizontal stripes on the letters "IBM" on the envelope of early floppy disks) looked pink.
I didn't notice this until after the contraption was turned and he started fiddling with the balls again, but if you watch closely, you can see that they roll "up" different ramps at different rates: almost a second along the lower-right ramp, yet barely half a second along the upper-left one. This wouldn't happen if the ramps were all the same length and incline, as they're meant to appear.
Sometimes, you just need to know what to look for...
http://home.utah.edu/~u0386084/hsci_old/belvedere.gif
http://home.utah.edu/~u0386084/hsci_old/belvedere_side_1.jpg
http://home.utah.edu/~u0386084/hsci_old/belvederelg.jpg