
Testing the Resolution of the Human Eye - brk
http://ipvm.com/report/testing_the_resolution_of_the_human_eye
======
hughes
This is misleading on several levels. The size of an image (measured in MP)
has almost nothing to do with the angular resolving power of a camera, and
even less to do with the resolution of the human eye.

The article didn't mention that every camera tested likely has a slightly
different focal length / field of view. Two cameras with the same resolution
but different focal lengths will score very differently in these tests. For
example, a 32mm lens will see much more detail on the chart than a 14mm lens,
even with the same resolution sensor.

Additionally, none of these cameras have the peripheral vision capabilities of
the human eye. Our brains pick up quite a lot of information outside the tiny
rectangle that these cameras are able to capture. The resolving power of the
eye also changes from the center of our vision to the periphery - just because
we can read the E on the chart in the middle of our eye doesn't mean we can
read it from the edge of our vision.

Comparing eyes to cameras at this level of simplicity is mostly meaningless.

~~~
Sharlin
People don't often realize how poor our visual acuity is everywhere outside
the fovea, the area of sharp central vision a few degrees wide (that's a few
fingers held together at an arm's length). We need saccades and microsaccades
and lots of processing power and plain guesswork on the part of the brain to patch
together the image we "see". The resolution of a camera sensor, on the other
hand, is not a function of the angular distance from the center of the image.

[http://en.wikipedia.org/wiki/Fovea_centralis](http://en.wikipedia.org/wiki/Fovea_centralis)

[http://en.wikipedia.org/wiki/Saccade](http://en.wikipedia.org/wiki/Saccade)

[http://en.wikipedia.org/wiki/Microsaccade](http://en.wikipedia.org/wiki/Microsaccade)

~~~
elwell
This is an awesome comment!

------
petercooper
But can you see the entire line of that chart _without moving your eyes at
all_? In terms of a single 'snapshot' of our vision, the resolution is
atrociously bad. Only a very small area in the middle of our field of view has
significant detail (test this by looking at some writing in the distance, then
staring at the first letter and trying to 'read' the rest of the writing
without moving your eyes - it'll just remain an indistinct blur).

~~~
vacri
If we're making it a competition, I'll note that the human eye natively
supports edge detection, and it operates in a much wider range of light. Eyes
also come pre-packaged in a format that allows for quite accurate
determination of distance at no extra charge. While cameras have a slightly
broader spectrum, eyes work in a far greater range of brightness - and seeing
in the near dark is far more useful than seeing the LED on the end of a
remote. Eyes are perfectly mobile, and don't require external power. Eyes have
a much faster traverse and can move to and focus on a target much more
quickly. If you're turning, eyes will give you a stable image of the world
around you to help keep your reference - cameras will just give you motion
blur. Eyes are very difficult to replace, admittedly, but they have pretty
astonishing self-healing characteristics and last for decades of constant use
- in the rain, snow, desert heat, even underwater (though admittedly the image
isn't great). Not to mention that if you roll your cameras, people will just
be confused as to your emotional state, and of course, it's harder to feel a
knot of love when staring into your significant other's IP cam...

------
sukuriant
I've been reading the comments and something stuck out to me. It's not
uncommon for people to have 20/15 vision with corrective lenses. The lenses
don't zoom the image in, as far as I can tell; and so, the 20/20 number we
often come up with has nothing to do with the resolution of the eye as a
sensor, but more to do with the focusing power of the lens. It might be more
useful to consider 20/15 or 20/10 as what humans can see, since the lens is
what's failing in the eye, not the sensor at the back. To that end, it'd also
be nice to see some really good glass to mitigate the lens's failings there.
There are charts that relate lens sharpness to sensor acuity. As long as the
lens sharpness is approximately equal to or better than the sensor's, we
should get an accurate result.
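For context, Snellen fractions map to angular resolution in a simple way (a
sketch, not part of the original comment; 20/20 is conventionally defined as
resolving detail that subtends one arcminute):

```python
def snellen_to_arcmin(numerator, denominator):
    """Smallest resolvable detail, in arcminutes, for a Snellen
    fraction; 20/20 is defined as resolving 1-arcminute detail."""
    return denominator / numerator

print(snellen_to_arcmin(20, 20))  # 1.0  (the "normal" baseline)
print(snellen_to_arcmin(20, 15))  # 0.75 (sharper than 20/20)
print(snellen_to_arcmin(20, 10))  # 0.5
```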

Also, I have 20/15 vision with contacts in and both eyes open (weaker with
just one eye open, as is normal); and, when I can see letters, I can see them
pretty well defined. I'm not sure how much of that is mental post-processing,
though.

------
VladRussian2
It is very probable that using GIMP's "curves" tool they would be able to
bring out a lot of detail in the 5MP and 10MP "blind" images. Digital cams
lack dynamic range. A "blind" dark image is very possibly something like this
rough simplified illustration: 0 where black and 1 [out of 255] where white.
To us both values look black, i.e. both very close to 0 on a 0-255 scale.
Using a tool like "curves" one can remap 1 to, say, 100 while 0 stays 0, and
this will frequently produce a reasonable, though somewhat noisy, image.
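The remap described above can be sketched with NumPy standing in for GIMP's
curves tool (the function name and the 1 → 100 values are illustrative):

```python
import numpy as np

def curves_stretch(img, high=1, new_high=100):
    """Linearly remap a nearly-black 8-bit image so that pixel
    value `high` becomes `new_high` while 0 stays 0."""
    out = img.astype(np.float32) * (new_high / high)
    return np.clip(out, 0, 255).astype(np.uint8)

# A "blind" dark frame: only 0s and 1s out of 255, black to the eye.
dark = np.array([[0, 1], [1, 0]], dtype=np.uint8)
print(curves_stretch(dark))  # 1 -> 100, 0 -> 0: detail becomes visible
```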

Or in other words, they judged the performance of cameras vs. eyes by using
human eyes. It would be interesting if they sent the images to, for example,
OCR software.

------
TrainedMonkey
Very interesting. However, this only tells half the story about human vision;
what about color depth, dynamic range, and speed (flash something on screen
briefly)?

~~~
Someone
Half? More like 1% or less. Dynamic range is incredible (1,000,000:1 according
to
[http://en.wikipedia.org/wiki/Human_eye#Dynamic_range](http://en.wikipedia.org/wiki/Human_eye#Dynamic_range));
one should also test vision in darkness and peripherally (rods have lower
resolution but higher sensitivity than the cones in the fovea; there are
claims humans can see individual photons), etc.

Even limiting ourselves to these experiments, test subjects should dark-adapt
for at least one hour before a measurement (given that the article doesn't
mention it, I think it is unlikely that was done). That can make a huge
difference.

Some (and more, including "The brain contains 10^12 neurons, 10^14 of which
are in the cerebellum.") of the numbers you ask for are in
[http://white.stanford.edu/~brian/numbers/node2.html](http://white.stanford.edu/~brian/numbers/node2.html)

~~~
nitrogen
Is the neuron number meant to be a joke, or is the 10^14 being greater than
10^12 a typo?

~~~
Someone
It is a joke, and an indicator of how little we really know. In the same vein,
that text has:

 _Number of neurons in the brain in 1974: 10 billion_

 _Number of neurons in the brain in 1994: 100 billion_

(Wikipedia gives 85 billion on
[http://en.wikipedia.org/wiki/List_of_animals_by_number_of_ne...](http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons))

------
jhonovich
As the editor of the post, I'd love to get technical feedback and happy to
answer any questions.

~~~
jerf
1\. As others suggest, if you have a RAW setting on your cameras, you should
use that. If you don't, you should probably get one that does.

2\. I'm going to disagree with the naysayers who say this is useless. You can
partially defang them by pointing out that while this doesn't fully test a
camera against a human eye, it is a reasonable test on its own merits and
generally answers the question that people are _actually asking_ when they ask
"what's the resolution of the human eye?" (After all, the difference in
capabilities goes both ways; sure, human eyes are a great deal more sensitive
on many measures, but would someone please show me their eye's 10x optical
zoom? (Hello to the 2040s!))

A further observation that "megapixels" is not useful would be called for,
though. We're really comparing angular resolution for a certain set of
settings. But we can accept "megapixels" as a slang term if we also fix
certain other camera parameters to ones common to consumer cameras in the
market.

3\. Using eye charts is a good idea, I think. I've seen comparisons that try
to compare rod density on the human eye, but that's really hard. I'd consider
adding a note that standardized eye charts measure what humans can _really
see_ rather than try to compare two extremely different hardware platforms
directly.

~~~
jhonovich
We are using IP cameras; that's our area of research / focus. As such, those
types of cameras do not have a raw option.

We used H.264 with default quantization. We certainly could have used MJPEG,
as these types of cameras support it, but in our preliminary testing it made
no difference.

That said, I do agree that a completely uncompressed raw image could make a
modest difference. For our domain, IP cameras, we just don't deal with raw
images.

------
farber
I know the tests were done by a company selling IP cameras, but there are many
cameras available which would make better competitors to the human eye. For
instance, the Leaf Aptus-II 12, which produces 80-megapixel images.

~~~
jhonovich
We do not sell any IP cameras. All we 'sell' is research and testing.

Our focus is on IP cameras only, ergo the type of cameras we selected.

------
tedsanders
Fun fact: The human eye sees in higher resolution for red and green than for
blue. Blue light is scattered more as it passes through the eyeball (and as an
evolutionary result, blue detectors are sparser on the retina).

------
boon
There are also algorithms that convert the raw data from the sensor into a
compressed stream. That would downgrade the quality of the camera.

Though, that does make me wonder if our meat software does something similar.

------
abstrakraft
First sentence: All sorts of wild guesses and theoretical calculations exist
about what the resolution of the human eye is, with 576MP a common claim.

576MP is not a resolution. Article immediately closed.

~~~
svachalek
Resolutions are commonly measured in pixels. Or are you expecting something
rectangular and ISO standard like 1920 x 1080?

Divide by pi, take the square root, multiply by two, we're talking about 27000
pixels in diameter.
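That back-of-envelope arithmetic checks out, treating 576MP as a filled circle
of pixels (a sketch of the calculation, not a claim about eye geometry):

```python
import math

claimed_pixels = 576e6  # the 576MP figure from the article
# Area of a disc: A = pi * r^2, so diameter = 2 * sqrt(A / pi)
diameter = 2 * math.sqrt(claimed_pixels / math.pi)
print(round(diameter))  # about 27,000 pixels across
```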

That said, I don't agree with a lot of the methodology. If pixels are what
we're going for, it only makes sense to me to count rods and cones.

~~~
colanderman
No, I think the GP means _angular resolution_ – i.e. pixels _per unit of solid
angle_ – which one might presume is the figure of merit here.

~~~
Dylan16807
It's trivial to transform the units, and it's easier to understand megapixels
or xxxxp. Terrible reason to close the article.

~~~
colanderman
_It's trivial to transform the units_

…except they're not dimensionally compatible. You need to know the solid angle
subtended by those pixels. That's like saying you can trivially transform 60
miles per hour into a distance. You can't, unless you also have a time.

More importantly, the solid angle in question is not obvious – are we talking
just the fovea (area of the eye with highest density of cones [1]), the eye as
a whole, or the solid area subtended by a standard computer monitor, or…?

[1] [http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.ht...](http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html#c2)

~~~
Dylan16807
The angle is the field of view. Because of the eye's uneven resolution you
need to have some idea of how they're handling the field of view no matter
what.

It's like saying '60 miles per hour from Vermont to West Virginia'. There's
some fuzziness about where exactly you put the end points, but you have a good
idea.

Basically, using raw angular resolution would _also_ be too vague without
context, but if you have context then megapixels are sufficient.

------
salient
That's a little more than 4k (8.3MP). I think I was expecting it to be a
little higher than that.

