Phone cameras can take in more light than the human eye (theconversation.com)
39 points by rntn 3 months ago | 64 comments



Brother Cavil: I don't want to be human! I want to see gamma rays! I want to hear X-rays! And I want to - I want to smell dark matter! Do you see the absurdity of what I am? I can't even express these things properly because I have to - I have to conceptualize complex ideas in this stupid limiting spoken language! But I know I want to reach out with something other than these prehensile paws! And feel the wind of a supernova flowing over me! I'm a machine! And I can know much more! I can experience so much more. But I'm trapped in this absurd body! And why? Because my five creators thought that God wanted it that way!


The resultant images are based upon intentional manipulation and exaggeration of the data available, in a model designed to make the resulting image as pleasing to the user as possible. They are, to some extent, lies / guesswork.

It's fast food art. Sprinkle the image with sugar and fat.


More of certain types of light, importantly.

They don’t have the dynamic range of colors to capture all the colors our eyes can see, but they can pick up other types of light that we can’t.

This fact was pointed out in a lot of reviews of the Vision Pro. It’s just about good enough to make you forget you’re in augmented reality but the world looks more dull and lifeless because your eyes see more vibrant colors than small digital sensors.


I don't think this is true. The less than real feeling from the AVP is more likely from the display than the camera. The human eye is capable of a wider color gamut and (probably more importantly) a much wider dynamic range of intensity than the AVP display is capable of producing.


Yes. The cameras on the VP are optimized for weight and size and don't have as good color sensitivity as larger lenses. The microOLED screens on the VP also do not have the range that a normal OLED would. It's a limitation of the current tech.


> They don’t have the dynamic range of colors to capture all the colors our eyes can see

Are you sure about that?

My understanding is that digital cameras can capture an extremely wide color gamut -- the vibrant colors -- but that the extra information is necessarily thrown away when encoded in sRGB or P3 in your image/video file.

We don't have many displays capable of showing those ultra-saturated colors, so we don't waste bits storing them in files either. P3, which Apple products mostly use now, is an obvious improvement over sRGB.

If you process RAW files directly from the sensor, I'm pretty sure you get a much wider gamut.

What I'm not entirely sure about is how that gamut maps to the colors our eyes can see -- it's not going to be exactly the same coverage. So I'm not sure where it goes beyond human eye sensitivity, or where it doesn't quite reach it, and to what extent this depends on the sensor technology (e.g. what's in your phone vs. an expensive DSLR vs. a professional cinema camera). Ultimately it's going to come down to the exact precise shape of the frequency curve in each of the R, G and B filters in the color filter array, and how they can be mathematically translated to reflect the human eye's response [1, 2].

(And so color vibrancy limitations with the Vision Pro are absolutely going to be coming from the displays before they come from the cameras.)

[1] https://en.wikipedia.org/wiki/Color_filter_array#Image_senso...

[2] https://en.wikipedia.org/wiki/Color_management#Color_transfo...
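
(To make that translation step concrete, here's a minimal, assumption-laden sketch of the matrix math. The camera matrix below is made up purely for illustration -- real ones come from calibrating a specific sensor's filter responses -- while the XYZ-to-sRGB matrix is the standard D65 one. The clipping at the end is exactly where the extra saturation gets thrown away.)

    import numpy as np

    # Hypothetical 3x3 color correction matrix: maps a sensor's raw RGB
    # (after demosaicing and white balance) into CIE XYZ. Real matrices
    # come from calibrating the specific sensor's filter responses.
    CAMERA_TO_XYZ = np.array([
        [0.55, 0.32, 0.08],
        [0.22, 0.70, 0.08],
        [0.02, 0.12, 0.86],
    ])

    # Standard CIE XYZ -> linear sRGB matrix (D65 white point).
    XYZ_TO_SRGB = np.array([
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    def camera_rgb_to_srgb(raw_rgb):
        """Map raw sensor RGB to linear sRGB, clipping out-of-gamut values."""
        xyz = CAMERA_TO_XYZ @ raw_rgb
        srgb_linear = XYZ_TO_SRGB @ xyz
        # A very saturated color can land outside [0, 1] in sRGB: that is
        # the information lost when encoding to a small gamut.
        return np.clip(srgb_linear, 0.0, 1.0)

    print(camera_rgb_to_srgb(np.array([0.05, 0.9, 0.05])))  # saturated green clips in R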


Yep, phone cameras can pick up IR light which is very useful for testing whether a TV remote works or not!


Newer smartphones have an IR filter in the lens assembly, so this doesn't really work anymore.


It works with my iPhone 15 Pro.

I do not see any visible light emitting from the remote. This is in a pitch dark room.

Edit: proof, taken in a semi-lit room for clarity: https://imgur.com/a/63FbXjQ

Edit 2: Tested on iPhone 15 Pro selfie camera. It detects IR at the same intensity.

Edit 3: Same as above with a Pixel 8.

Edit 4: I now have my office involved, this is fun. Same with a Samsung S24.

Final Edit: OK, I just went through about 20 colleagues' phones, which are various mixes of iPhone and Android, new and old. Testing the front and back cameras, every camera on every phone saw the IR light.

If a smartphone camera does not see IR light, this appears to be the exception, not the rule. OK, back to work!

These are all US devices, in case that matters. We don't seem to have an agency that regulates light emissions from cell phones.


Some phones have an IR filter only on the main camera, not on the selfie camera. So if you really want this to work, try the selfie camera instead.


See my updated comment above. I just tested a bunch of phones, and all of the cameras front and back could see the IR light.

I acknowledge that I may be the only person in the world that is this interested in this fact.


I've always been fascinated by this topic as well. As a further experiment, you may be interested to know that these IR lights can pass straight through red wine that looks totally dark and opaque to the human eye. I took some photos to demonstrate this with a DSLR with the IR filter removed here [1], but you can test this yourself by using a smartphone to look at the IR light of a TV remote with a glass of red wine in between them.

[1] https://alexbock.github.io/blog/nir-water-red-wine-compariso...


I've been pondering some sort of "night vision"[1] system for my car after noticing how excellent the low-light performance is on my VIOFO dash cam.

The dash cam has a video-out port, but unfortunately it appears to be NTSC resolution. I'd love some sort of setup that outputs to >=8" 1080p display attached to my dash. It would help so much in my rural area with wildlife in the road, as well as the constant random pedestrian walking on an unlit rural highway in dark clothing.

Ideally, if I could get great quality, low noise low light video like the VIOFO, I could then start playing with object identification with OpenCV.

1. I worked on such spectral systems in a past military life but don't want to attach something big like a Cadillac FLIR unit, or something expensive, like nearly every viable consumer FLIR option. All the "affordable" consumer FLIR options suffer from low resolution and/or low response time.
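
If you do get a clean feed into a computer, the OpenCV side can start very small. A minimal sketch, assuming a capture device on index 0 (the built-in HOG person detector is only a starting point; wildlife would need a proper DNN model and probably some denoising first):

    import cv2

    # Toy loop: run OpenCV's built-in HOG person detector over frames
    # from a camera/capture-card feed and draw boxes around detections.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # placeholder device index; could be a file path
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (960, 540))  # smaller frames detect faster
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("night vision", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()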


> All the "affordable" consumer FLIR options suffer from low resolution and/or low response time.

When I experimented with connecting two thermal cameras to a VR headset for stereo thermal vision, I used two Seek CompactPRO FastFrame units. They're 320x240@15Hz for $400 which is a lot more usable than the typical 80x60@9Hz consumer thermal, and it's easy to integrate the Android model into custom applications. They also have a 320x240@25Hz model for $1000.

I'm still impatiently waiting for affordable 640x480 thermal cameras, but in my opinion 320x240 at moderate frame rate is past the good-enough threshold to be legitimately useful for high contrast situations like identifying warm-blooded life on the side of a rural road.

> I'd love some sort of setup that outputs to >=8" 1080p display attached to my dash.

The Tesla Cybertruck has an option to display the view from the front bumper camera on the 18.5" main screen, but front camera display is unfortunately not available in any of Tesla's other models. With the proliferation of large touchscreens and camera arrays, more vehicles may support this from the factory soon.


Any reason why evolution didn't give us IR vision too?


Since we're warm-blooded animals, maybe it's because IR vision would be swamped by the emissions of the eye itself?


Doesn't the eye lens solve that by concentrating multiple rays onto one point?


"Infra-red" covers a bunch of wavelengths. 'Near Infra-red' is very close to visible light, and is what's used by remote controls and picked up by cell phone camera sensors. It's around the 0.75–1.4 μm range. On the other hand, thermal cameras are sensitive to 'Long-wavelength infrared' which is around 8–15 μm wavelength.

Marketers of things like CCTV cameras love to sow confusion about these things, as NIR-sensitive cameras are extremely cheap while thermal cameras are comparatively expensive.

Humans are not sensitive to NIR because for the vast majority of human existence, any time there was NIR light there was also an abundance of normal visible light, due to a little thing called "the sun"


Idk, I think it would be nice to have non-shit night vision.


Non-shit night vision, the kind where warm things glow at night, requires LWIR sensors that are at a temperature lower than the warm thing (otherwise the sensor itself glows).

NIR sensitivity does not improve night vision. That mostly requires a "tapetum lucidum" or reflective layer, more rods than cones (less helpful when you want to see colors in daylight) or just larger, more biologically expensive and vulnerable eyes than necessary.


> NIR sensitivity does not improve night vision

Your comment and its grandparent are both excellent, but I would like to nitpick a bit here. Technically, NIR sensitivity does improve night vision through the simple mechanism of "detect more wavelengths = detect more total light". It's not thermal imaging, but it is the reason that CCTV cameras often mechanically move their IR filter out of the way when the light falls below some threshold - at that point, sensitivity is more important than color fidelity, so they accept all the light they can get, even nonvisible. Of course this is also often coupled with NIR illumination LEDs.


Not knowing if the remote batteries were dead wasn't a big driver of reproduction a million years ago, mostly.


Yes! I came specifically to post "yeah but... dynamic range". As someone heavily into video production and imaging in general, we're still a long way from capturing the overall fidelity of the human visual system. The current state of what high-end consumer gear delivers to end users in HDR and WCG (wide color gamut) is still quite disappointing compared to the marketing claims and technical specs.

It requires a surprising amount of expertise, effort and money to assemble, calibrate and feed content to a high-end home theater sufficient to correctly deliver even the (currently modest) full capabilities of 4k HDR10+ / DolbyVi$ion. And the home theater content ecosystem is still a mess of mixed, partial implementation of supposed standards.

Implementation of HDR/WCG for web media is currently even less evolved. Despite many newer, high-end laptop, computer and phone screens having decent capabilities, inconsistent file formats, browsers, drivers and OS implementations seem far away from this "just working" for end users.


I took a photo of a painting I recently purchased and the image on my phone is crisper and the color contrast is more pleasing to my eye than the actual painting. This really surprised me.


That probably has more to do with the display you're viewing the image on than the actual capture.


Also, most modern phone cameras use AI and other processing to optimize pictures by default to look more pleasant to the human eye. That is their selling point.


During the recent aurora activity, I took multiple pictures from my back yard using my iPhone 15 Pro to see if it would show up. It didn't, but I was blown away by just how many stars were visible in the resultant photos. I need to go back and see how many Messier objects I can get to show up that way.


The aurora was visible for me and it was remarkable how much better it looked through my phone. What were faint grey lines, at first not even something I would think was the aurora if it were any other day, came out as vibrant green and purple through the screen; reminded me of the They Live glasses.


The aurora really surprised me that way: I had only seen pictures before, and never guessed so little of it would be visible in real life. The effect is definitely better seen through a camera!


That's always going to be true of the night sky though -- and is the goal of astrophotography.

That's not to say that there aren't some amazing aurorae that you can experience with the naked eye, if you go closer to the poles (well, practically just the Northern one) on a good night you won't need a camera!

Even for those, presumably a camera will catch more stuff in the background though.


Conversely, my experience with the 2016 and 2024 total solar eclipses was the exact opposite. I was expecting reality to look less impressive than eclipse photos, but nope. You look up in the sky there's a damn ring of fire up there.


I looked up and saw the deep blue sky, and stars, and a Black Hole Sun... An unnaturally dark void, surrounded with light

Quite unnerving


Aurora magic is possible only because of computational photography. There is no way a phone camera, at the base level, can see more than an eye.


Why do you say that? Or do you call long exposures computational?


They are not long exposures. If they were, you would see star trails, because the Earth is spinning.


Rather depends upon what you consider "long" to mean. The sun (and the stars) move about 15 degrees per hour, and the angular field of view of a zoomed iPhone 13 shot is about 23 degrees (according to a blog). At 12MP resolution, a star crudely moves about one pixel per second. A ten-second exposure is certainly long compared to the light gathering done by the eye, but a ten-pixel elongation of the blob of a bright star won't be very obvious, and may be rather less than the smearing caused by atmospheric "seeing".
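
(If anyone wants to sanity-check that back-of-the-envelope figure, here's the arithmetic, using the same numbers quoted above:)

    # Back-of-envelope star drift, using the figures quoted above.
    fov_deg = 23.0                        # horizontal field of view of the zoomed shot
    width_px = 4000                       # a 12MP sensor is roughly 4000 px across
    sky_rate_deg_per_s = 15.0 / 3600.0    # ~15 degrees per hour

    px_per_deg = width_px / fov_deg                   # ~174 px per degree
    drift_px_per_s = sky_rate_deg_per_s * px_per_deg
    print(f"{drift_px_per_s:.2f} px/s")               # ~0.73 px/s, i.e. about 1 px/s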


Whoa, you're just way off base here. You can take a long exposure and avoid star trails depending on a couple of factors, primarily the focal length of the lens. The longer the lens, the less time it takes to start seeing trails; the wider the lens, the longer you can expose. I've taken up to 45s exposures with a 20mm lens on a DSLR with no trails. Since camera phone lenses are typically wider angle, the limiting factor is having a support to hold the phone still for longer exposures.


Some sources (https://lweb.cfa.harvard.edu/webscope/activities/pdfs/eyeTel..., page 7) say the human eye's "exposure time" is around 1/15th of a second. So a 1 second exposure is 15 times longer and won't see star trails, at least not at normal zoom levels.


You absolutely see star trails on iPhone night mode photos when setting the exposure to 10s or more. For aurora, 10s is way too much though; 1-3s seems right, and is what the iPhone allows when not on a stable surface or tripod.


Am I the only one bothered by:

"As a professor of computational photography, I’ve seen how the latest smartphone features overcome the limitations of human vision."

combined with:

"AI allow these devices to capture stunning images"

?

I'm not an expert, but every time I casually read about how computational photography works, I get the impression it is some very clever image processing algorithms (eg, pixel alignment of multiple images, anti shake, combining multiple images to enhance dynamic range, depth perception, detecting under and over exposure in parts of the image) put together by a software engineering team made up entirely of humans. Occasionally there may be an AI detecting faces, smiles and blinks for things like timing and framing - but not in the aurora photos the article features.
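
For what it's worth, the multi-image combining really is just careful signal processing. A toy sketch with OpenCV (plain ECC alignment plus averaging; real burst pipelines do tile-wise alignment, merging and tone mapping, but nothing here is a learned model):

    import cv2
    import numpy as np

    def align_and_average(frames):
        """Toy burst stacking: align each frame to the first, then average.
        Averaging N aligned frames cuts sensor noise by roughly sqrt(N)."""
        ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        acc = frames[0].astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-5)
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            warp = np.eye(2, 3, dtype=np.float32)
            # Estimate the translation that maps this frame onto the reference.
            _, warp = cv2.findTransformECC(ref, gray, warp,
                                           cv2.MOTION_TRANSLATION, criteria)
            aligned = cv2.warpAffine(frame, warp,
                                     (frame.shape[1], frame.shape[0]),
                                     flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
            acc += aligned.astype(np.float32)
        return (acc / len(frames)).astype(np.uint8)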

Which means I'm looking at an instance of the term "AI" evolving in meaning in popular culture. Software engineers have a very particular set of statistical techniques and algorithms in mind when they use the term AI. The new meaning seems to be "any set of computational gymnastics it would require an expert to understand", or worse just: "big complex algorithm".

Sigh. It makes me sad. But then so did "OS" changing its meaning from "a layer of software that abstracted the details of the underlying hardware from user space programs" to "Android" / "iOS" / "Windows". The old, more formal, precise and limited definition we software engineers invented is now at best jargon, but in reality probably gone. When I see younger programmers use the term "OS" here, they invariably mean the common definition.


For this reason, you can also use them (or a webcam) to make a DIY spectrometer (or just buy one pre-made [0] for £130.00).

[0] https://chriswesley.org/spectrometer.htm
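
The software half of the DIY route is genuinely tiny. A rough sketch, assuming you've photographed the diffraction smear into a file called spectrum.jpg (wavelength calibration against a couple of known reference lines is left out):

    import cv2
    import numpy as np

    # Collapse the photographed diffraction pattern into a 1-D curve of
    # brightness vs. horizontal pixel position, then print a crude plot.
    img = cv2.imread("spectrum.jpg", cv2.IMREAD_GRAYSCALE)
    profile = img.astype(np.float32).mean(axis=0)   # average each column
    profile /= profile.max()                        # normalise to 0..1
    step = max(1, len(profile) // 60)
    for col in range(0, len(profile), step):
        print(f"{col:5d} " + "#" * int(40 * profile[col]))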


I got a Google Pixel 8 Pro a few weeks back and holy shit it's good.

I stopped using my Canon 80D DSLR this holiday, and we took a handful of benchmark pictures I will analyse later.

The night mode in dark areas? Wow. Tele? Wow. Macro? Awesome.


Have they fixed their skin algos yet? I quit using Pixels, I think around the 5 or 6, because of how unnatural they make people look. I'm a typical older white male: I have white skin, odd freckles and other 'spots', some redness here and there, like a typical person.

When a picture of me is taken with a Pixel, I have unnaturally uniform tan skin and am blemish-free. And this is without any 'beauty filter' or whatnot.


Same. For me it was white skin turning gray and blue eyes turning black. Not always but frequent enough for me to buy a different phone (which doesn't have this problem).


I have not seen anything unnatural, and I hate this type of adjustment, but I'm very, very impressed so far.


There is one issue. The image you see on your phone display is not "original" anymore in the traditional sense. It is enhanced in so many ways that it looks pleasant, but it can also alter things, even unnoticed, by guessing what would look good and what you want to see.


I have a pixel 8 pro and a canon r5 and honestly, unless I'm shooting at f2.0 or less, I generally prefer the pixel images. Not to mention they're instantly sharable.


Try birding at 500mm or more. Pixel soup (pun intended) vs crisp details from the R5 with 100-500, even at f7.1 or higher. No comparison.

Even at 105mm f4 the difference is night and day. Portrait mode is good enough for casual use, but the real thing still looks better.


Sure, it's weaker in telephoto. (I rarely shoot beyond 85mm.) Fwiw I never use portrait mode -- too artificial. The main lens still gets pretty nice shallow DOF tho


Integration time! (Or “ai computational photography” on noisy images.)


Those night photos are purposefully set to "record" more light into ONE FRAME than the human eye records per frame. I am so annoyed at this analogy.


Phone cameras will always produce muddy shit which looks like bad oil paintings. My Fujifilm XT-30II makes better photos at ISO 6400 than phones do at base ISO.


Before smartphone computational photography came along -- in the conventional camera era -- it used to be a given that the eye was better than the sensor, or film.


What? When was this a thing? The main advantage of the sensor or film is the ability to do long exposures, capturing way more photons on the same spot than your eye can in real time. This really just seems like a confusion of the entire concept to me.


Before Google and Apple's forays into computational photography, long exposures used to be associated with blurring, which was not a substitute for what the human eye could offer, unless you were taking pictures of static objects.


who what when where?

Please don't confuse your misconception of what dragging the shutter speed would do in your day-to-day usage with what professionals do with long exposure.

Sure, adding motion blur is an effect one can achieve with long exposure. You need some extra things like ND filters to avoid overexposing, but that's a technique someone is specifically trying to achieve. It's not a mistake, like something my mom would make.

We've been using long exposure for things like light painting since well before Apple/Google. We've been using long exposure for astrophotography for a long time as well. I still feel utter respect for Hubble et al., night after night, manually guiding the scope to keep the image in frame. In fact, the same frame of film would be exposed for multiple hours each night, for multiple nights, before finally being developed.

What you've associated things with in your mind does not mean the rest of the world only associates the same things in the same way.


A long exposure by definition means leaving the shutter open longer, causing blurring unless you are using computational photography or, as I said, the object is static. Apple and Google have eliminated the blurring that inevitably came with long exposures, in the realm of consumer photography. Of course, scientists had been able to independently obtain similar results using techniques similar to Google and Apple.

Light painting is not about compensating for low light. On the contrary, it typically involves reducing the ambient light in order to manually "paint" with a light bulb for artistic effect.

Which part of the above are you debating?


Anyone attempting a long exposure is not trying to do it in a tracking sense; they are doing it for a static shot. It is a deliberate decision to leave the shutter open. If they are doing that in the daytime to add "motion" to the image through blurring, it is a deliberate decision. You attempting to take a static picture in low light, where the camera's auto settings drag the shutter and can't take a proper picture because you can't hold it still long enough, is not the same thing. At all.

I like your "reducing the ambient" comment; it really lets me know you have no clue what I'm referring to, but that's okay. This whole thread is just wasted energy.


"Except if the subject is static" is most of long exposure photos.

Blurring is frequently the goal. Blurring waterfalls is a popular technique. As is blurring the ocean. Or streaking of light from cars. Searching for "long exposure" shows these techniques.

They use tripods to keep some things static. Smartphones can take those photos just as well as cameras, with the computational photography turned off. The limitation is needing a neutral density filter to make the exposure work.


Yes, historically it is most of them, and I am saying that is now no longer a technical necessity, but an artistic choice.


Long exposures don't inherently mean blurring. The camera or mount can move to track a moving object, keeping it sharp. Astrophotography has been and still is done without computational photography.


> and still is done without computational photography.

um, ackshully, lots of computational editing is being done now. The image of Sgr A* was heavily computed. They now have to use computed images to remove all of the man-made objects in the sky, from Starlink satellites to planes. There's a lot of stuff done to astro images now, and computational editing is making its way there.


When I said that it is still done without computational editing, I didn't mean that all astrophotography is done without it. I meant that some is done without it.


And yet iPhone photos still look awful because of over-processing. That one split second when you see the beautiful photo you took, and then BOOM, "deep fusion" makes it look crispy and gross.



