The problem, of course, is that we still map these different effects to human-perceptible colors, and then use a human brain and an ontology shaped by human perception to understand the resulting picture. So this needs a footnote in the form of Nagel's What is it like to be a bat? [1]
Hacker News is a funny place. I saw the story and thought "Ha! That is only a small part of it. They should read Nagel's 'What is it like to be a bat?'". Of course the top rated comment would already say that - obvious really. Also sometimes sentences here end with ?'". and make perfect (I think) sense.
Heh, you state the problem more concisely than I did, though we have the same objection. I haven't read that - it looks interesting! Saving for later, thanks for the link. Although the footnote notation seems superfluous given its location in your comment...
Ahh - this stuff always disappoints me. It's so simplistic to suggest that animals like bees see just like us, but with a different color scale or something. The different wavelength sensitivity is one thing, but the entire experience of vision would be fundamentally different for an insect - from the basic optics of ommatidia to the faster perception of time and a dozen other things. Nautilus is a good site, but I can never take this type of article seriously - like right-brain/left-brain stuff, for me it distorts the facts too much to be really informative.
I would think the most direct approach to exploring other animals' "experience of vision" would be to take the machine-learning algorithms we've trained to extract sense data from the human optic nerve/neocortex, run them on recordings of other animals' brains, and see what we get.
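To be clear, that's pure speculation on my part - but a toy version of the decoding step might look like this (Python, with synthetic data standing in for real recordings, and ridge regression standing in for whatever model the labs actually use):

    # Toy sketch of the decoding idea: fit a linear map from "neural
    # responses" back to stimuli. Everything below is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_neurons, n_pixels = 500, 80, 16

    # Fake stimuli (tiny flattened images) and fake responses that are
    # a noisy linear function of them.
    stimuli = rng.normal(size=(n_trials, n_pixels))
    filters = rng.normal(size=(n_pixels, n_neurons))
    responses = stimuli @ filters + 0.5 * rng.normal(size=(n_trials, n_neurons))

    # Ridge-regression decoder: stimulus ~= responses @ W
    lam = 1.0
    W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_neurons),
                        responses.T @ stimuli)

    decoded = responses[:10] @ W
    print(np.corrcoef(decoded.ravel(), stimuli[:10].ravel())[0, 1])

The open question is whether a decoder trained on one species' recordings produces anything but noise when run on another's.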
Unfortunately, the raw data coming from the eye is very much just that: raw. The experience of vision is only completed after much processing in the sensory cortex. There's a huge amount of metadata that gets mixed in with the stream - things like edge detection, attention, motion prediction, and so on. And even that's going to differ from person to person.
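For a concrete (if cartoonish) example of that metadata, here's edge detection - one of the steps the early visual system is known to perform - done on a synthetic image with numpy/scipy:

    import numpy as np
    from scipy import ndimage

    # A synthetic "retinal image": a bright square on a dark background.
    image = np.zeros((64, 64))
    image[20:44, 20:44] = 1.0

    # Sobel gradients; the magnitude is large only at edges, so most of
    # the raw intensity data is already discarded at this stage.
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    edges = np.hypot(gx, gy)
    print(edges.max(), int((edges > 1.0).sum()))

The point being: what travels onward isn't pixels, it's derived features like this, with attention and motion signals layered on top.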
It would be very interesting, though, if we could hijack an optic-nerve stream, calibrate it using a test image, and then attempt to recreate the visual processing units. Unfortunately this falls prey to the same fundamental problems in some ways - we would only be able to interpret the signal of a mouse or cat in a way that we understand. That is, we would attempt to recreate a signal that is coherent to us and makes sense to our visual system, but that isn't really representative of their actual visual experience.
But it was informative enough for others (like me) less informed than you. Actually, I knew about the faster perception of time in insects and probably many other sparse details, but this article does its share to provide information in an accessible and attractive way. Give it credit for that (along with your criticism).
Certainly - and the way of showing it with the sliders is nice. I have a habit of pointing out problems instead of the good bits - usually because the good bits have already been acknowledged a few pixels away, you know?
I face this issue a lot too as a writer who has to take on complex science and make it understandable to a broader audience - something like quantum computing, for instance, or solar concentrator efficiency. It's difficult! And sometimes the details have to take a back seat. But with a lot of vision and cognition stuff I feel that fundamental concepts go by the wayside in order to make it more easily digestible. I think many people could handle, and might appreciate, taking it to the next level.
Interesting. Reading about tetrachromatic birds and infrared-sensing snakes, I couldn't help but wonder whether there is an animal out there with the ability to (naturally) sense the whole spectrum. The mantis shrimp, however, has a bee-like compound eye, and I guess it suffers from the same resolution penalty.
Wouldn't that mean that x-rays move faster than light (well, light in a vacuum) when they're in something other than a vacuum? Doesn't sound quite right.
This spurred me to think about how you could detect X-rays, though.
X-rays get absorbed really quickly in things like water, so it'd be pretty hard to sense them directly with an eye like ours, but I can imagine a sensory apparatus on the skin, or behind a thin layer that's transparent to X-rays, that would detect them.
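To put rough numbers on "really quickly" - a Beer-Lambert estimate, using approximate mass attenuation coefficients for water (ballpark figures from the NIST tables; treat them as illustrative):

    # Beer-Lambert: I/I0 = exp(-mu * x); the 1/e length is 1/mu.
    mu_per_rho = {10: 5.33, 30: 0.376, 100: 0.171}  # keV -> cm^2/g, approx.
    # Water density is ~1 g/cm^3, so mu has the same numeric value in 1/cm.

    for keV, mu in mu_per_rho.items():
        print(f"{keV:>3} keV: falls to 1/e after ~{1.0 / mu:.2f} cm of water")

So soft X-rays die within a couple of millimeters - a thin detector layer would only work at the soft end of the spectrum.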
Another cool option would be to take advantage of all the free electrons that an X-ray creates as it bounces around getting absorbed in a solid or liquid. If you had a sensory apparatus that consisted of a conductive liquid with a voltage across it, you'd generate a current whenever an X-ray hit. This is similar to how things like Geiger counters work, although I doubt an animal would be able to generate enough voltage to create avalanche amplification.
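For scale, here's the charge one absorbed photon would give you, assuming the textbook figure of roughly 30 eV spent per ionization (a typical gas value; the right number for a biological liquid is a guess):

    # One absorbed X-ray frees about E_photon / W electrons.
    E_photon_eV = 10_000     # a 10 keV X-ray
    W_eV = 30.0              # ~eV per ionization (assumed)
    e_charge = 1.602e-19     # coulombs

    n_electrons = E_photon_eV / W_eV
    print(f"~{n_electrons:.0f} electrons, ~{n_electrons * e_charge:.1e} C per photon")

About 5e-17 C per photon - tiny, which is exactly why Geiger tubes add avalanche gain, the part an animal would struggle to reproduce.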
I bet there are other ways, but those are two off the top of my head.
"That vision mechanism comes at a price — bees’ eyes have extremely low resolution, so their vision is very blurred. Nilsson calls this design “the most stupid way of using the space available for an eye.”"
The article forgets that the compound eye allows bees to have a much higher "frame rate" per "pixel" than a human eye ever could have.
There's also the whole thing where a compound eye enables the capture of a full light field, like the Lytro camera, which uses a "plenoptic" system to capture image data that can be re-focused or used to create 3D images after being taken.
Which rather implies that a bee has a constant, full 360-degree 3D model of its environment, but without the heavy memory and processing requirements needed in human vision to get that.
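For anyone curious what the plenoptic refocusing trick actually amounts to, it's basically shift-and-sum over the sub-aperture views. A toy sketch, with random arrays standing in for a real light field:

    import numpy as np
    from scipy.ndimage import shift

    def refocus(views, alpha):
        # views[u, v] is the image seen from lenslet offset (u, v);
        # alpha selects the virtual focal plane (0 = original focus).
        n_u, n_v, h, w = views.shape
        acc = np.zeros((h, w))
        for u in range(n_u):
            for v in range(n_v):
                # Shift each view in proportion to its aperture offset,
                # then average; objects at the chosen depth line up.
                du = alpha * (u - n_u // 2)
                dv = alpha * (v - n_v // 2)
                acc += shift(views[u, v], (du, dv), order=1, mode="nearest")
        return acc / (n_u * n_v)

    views = np.random.rand(5, 5, 32, 32)    # stand-in light field
    print(refocus(views, alpha=0.5).shape)  # (32, 32)

Whether a bee's brain does anything like this averaging is, of course, a separate question.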
"And even though they have incredible color-changing skills — going from beige to blood-red or striped in the blink of an eye — cuttlefish are totally colorblind."
Which raises the question of how cuttlefish mimicry works - how can it imitate what it does not perceive? There must be some feedback mechanism; what signals does it use?
Are there cameras capable of taking pictures in a broader spectrum of colours available for less than a small fortune? I'd like to see photos with the infrared-to-ultraviolet range "compressed" into our red-to-blue range.
Although cameras have built-in filters that block UV and IR light, the IR blocking is often weak and lets some light through. By adding a filter[1] that blocks visible light, there is often enough IR left to expose a photo[2]. With some caveats[3], it can be a cheap way to see the world in a different part of the spectrum.
[1] Hoya R72 for example
[2] A tripod is usually necessary since only a fraction of the light makes it through the filters. Even in bright sunlight a long exposure time may be required - there's a rough calculation after these notes.
[3] Not all lenses work well in IR. Some have internal reflections that lead to "hot spots" or white patches, and some have optics that perform badly in IR, causing blur. Similarly, some cameras' IR cut filters are too strong to be usable.
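To see why [2] bites, run the exposure math: every stop of light lost doubles the shutter time. The 10-stop loss below is just an illustrative guess; the real figure depends on how aggressive the camera's internal IR-cut filter is.

    base_shutter = 1 / 250   # seconds, a typical sunny-day exposure
    stops_lost = 10          # R72 plus the in-camera IR-cut filter (assumed)

    print(f"{base_shutter * 2 ** stops_lost:.1f} s")  # ~4.1 s - hence the tripod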
The CMOS and CCD sensors in digital cameras are sensitive to infrared and (near) UV. These sensors nearly always come with infrared and UV filters to keep them from recording things you don't see. If you are willing and able to hack your camera, you can remove this filter to create a camera sensitive from near UV to true IR, ca. 200 nm to 900 nm. You'll have to change the lens to make use of the UV part of the spectrum, since optical glass blocks most of it.
I'm surprised to find out that human vision is among the sharpest. I didn't know humans outperformed other animals at anything other than brains and long-distance running.
Some of them will be impossible because digital images are already limited to the human-visible range, so you don't have infrared or ultraviolet values. But there could be filters for transformations within our range.
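A sketch of what such a "compression" filter could look like if you did have extra bands - the 4-channel UV+RGB input here is synthetic, since (as noted) ordinary image files don't carry UV or IR values:

    import numpy as np

    bands = np.random.rand(4, 4, 4)   # hypothetical [UV, B, G, R] image

    # Each display channel is a weighted mix of the input bands; rows
    # are display R, G, B. This particular matrix is an arbitrary choice.
    mix = np.array([
        [0.0, 0.0, 0.2, 0.8],   # display R: mostly real red
        [0.0, 0.3, 0.7, 0.0],   # display G: green with some blue
        [0.7, 0.3, 0.0, 0.0],   # display B: mostly UV, shifted into view
    ])
    rgb = np.clip(bands @ mix.T, 0.0, 1.0)
    print(rgb.shape)  # (4, 4, 3)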
I think it is an interesting and cool piece of news, but not that impressive. I find it more impressive that human beings all have the same eyes, yet we each see a different world.
[1] http://organizations.utep.edu/Portals/1475/nagel_bat.pdf