This is amazing. Now just FYI, for those curious, some recent "old guard" innovations by pro camera makers for astrophotography include:
1. Leveraging the multi-axis image-stabilizing movement available to an in-camera DSLR sensor with GPS for the purpose of tracking the sky during a long-ish exposure to reduce star trails. Ricoh-Pentax Astrotracer. http://www.ricoh-imaging.co.jp/english/photo-life/astro/
2. Removing the camera's IR filter, allowing it to capture hydrogen-alpha light (656nm). This captures energy (image, color) not otherwise seen by normal camera sensors. Canon EOS Ra. https://www.usa.canon.com/internet/portal/us/home/products/d...
I know this only because I've been researching a DSLR/mirrorless camera upgrade but keep delaying it whenever I'm reminded how excellent phone cameras have become! Unless, that is, you're a pro, a pixel-peeper, an artist, or someone who simply enjoys the machinery and process.
I think what "has not been retouched or post-processed in any way" is supposed to convey is that no work is needed by the user of the camera app to get pictures of this quality. There's lots of in-camera processing, of course. That's always been the case for digital cameras: sensors don't produce JPEG files.
The problem is that the definition used to match the capabilities. The capabilities changed. Should the definition? If I say I haven't retouched or post-processed a picture of myself, you used to be able to assume it hasn't been airbrushed or edited to make me look fitter. In fact, that used to be how you'd say it. Linguistically, it's extra weird when you're announcing a new auto-retouching and auto-post-processing feature.
Exactly. And heavy post-processing is necessary for even very simple things, because a camera's sensor doesn't match the output picture all that well. See for example https://en.wikipedia.org/wiki/Bayer_filter#Demosaicing. The post-processing has moved up the semantic stack and now cameras do things like face detection and smoothing, but there's always been some amount of post-processing in digital photography.
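To make that concrete, here's a toy sketch of the simplest possible demosaic of an RGGB Bayer mosaic. The layout and the plain neighbour-averaging are my assumptions for illustration; real camera pipelines use far more sophisticated edge-aware interpolation.

```python
import numpy as np

def demosaic_bilinear(raw):
    # Toy bilinear demosaic of an RGGB Bayer mosaic (layout assumed).
    # Each photosite records only one colour; the two missing channels at every
    # pixel are filled in by averaging neighbouring sites of that colour.
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3), dtype=bool)

    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; known[0::2, 0::2, 0] = True   # red sites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; known[0::2, 1::2, 1] = True   # green sites
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; known[1::2, 0::2, 1] = True   # green sites
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; known[1::2, 1::2, 2] = True   # blue sites

    for c in range(3):
        vals = np.pad(rgb[..., c], 1)
        mask = np.pad(known[..., c], 1).astype(float)
        # Sum of known neighbours and how many of them there are, per pixel.
        s = sum(vals[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        n = sum(mask[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., c] = np.where(known[..., c], rgb[..., c], s / np.maximum(n, 1))
    return rgb
```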
What people are trying to say is that if this is your definition of post-processing, then there doesn't exist any digital camera system that produces photos without post-processing. Indeed, such a camera cannot exist.
I know; I work in digital image processing, and I'm an amateur astrophotographer. Demosaicing is not on the same level as star registration and stacking (possibly with HDR processing in the middle), plus some more AI-driven "magic" in between.
That's what I'm referring to: for the shown image "without post-processing", the phone has performed a lot more actions after demosaicing, which is why I say it was heavily processed.
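For anyone wondering what registration and stacking actually involve, here's a minimal pure-NumPy sketch: translation-only alignment via phase correlation, then a straight average. Real stacking tools also handle rotation, sub-pixel shifts and outlier rejection; this is just the core idea.

```python
import numpy as np

def estimate_shift(ref, img):
    # Phase correlation: estimate the whole-pixel translation between two frames.
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(f / np.maximum(np.abs(f), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def register_and_stack(frames):
    # Align every frame to the first, then average: the signal adds up
    # coherently while the noise (different in every frame) averages out.
    ref = frames[0].astype(np.float64)
    aligned = [ref]
    for img in frames[1:]:
        dy, dx = estimate_shift(ref, img)
        aligned.append(np.roll(img.astype(np.float64), (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```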
Those lines have been blurring for a good while. Put an Insta filter on your photo? Tut tut, naughty! Your phone's camera app does the same thing for you?* Your phone sure has a good camera!
* Aggressive HDR, sharpening, automatic tint 'fixing', etc. Am I talking about filters or about automatic photo processing in flagship phones?
I do get the sentiment in that statement, in that if you take a photo with that phone, you can get the same output without doing much more than pressing the capture button and being still. Very still.
That's the only way for mobiles to get decent results though. They're so physically limited (sensor size, fixed aperture, lens size, &c.) that the original files are horrendous to look at.
What Google is doing with their Pixel phones is magic; they turned a normal camera into a very, very good one. I'm still impressed every time I use Night Sight. The drawback, as always, is that as soon as you zoom in or see the pictures on a large screen (or printed), it's painfully obvious that they're heavily post-processed. It almost looks like an abstract painting or an AI-generated image (which it almost is).
If you're delaying a DSLR/mirrorless because the camera already in your pocket is good, I would suggest also looking into the highest end point-and-shoots. They're getting crazy useful, they shoot in RAW, they have most of the bells and whistles you'd want outside of physical stuff like changeable lens options. Mine is Good Enough™ for everything hobbyist and I'm super happy with it. And it fits in my pocket.
Pixel peeper, perhaps, but these images are only nice when you look at them on a smartphone screen. Print one or view it on a large screen and the pictures from a mobile phone give a whole different feeling and sense of quality.
This is what I understood: I think the software uses the GPS and camera orientation to calculate which way the sky moves relative to the lens. It then uses the camera's image stabilisation to track the sky, i.e. instead of moving the camera, it moves the sensor.
Yeah, though obviously since this is merely image stabilisation, and not a dedicated de-rotation, there are limits to how far the system can move the sensor and thus to how long the exposure can be.
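As a back-of-the-envelope illustration of that limit (my own rough model, not Pentax's actual algorithm): the stabiliser has to chase the star image across the focal plane at roughly the sidereal rate projected onto the frame, and it only has a few millimetres of travel.

```python
import numpy as np

SIDEREAL_RATE = 7.292e-5  # Earth's rotation in rad/s, about 15 arcsec per second

def sensor_drift_rate(lat_deg, alt_deg, az_deg, focal_length_mm):
    # Camera pointing direction in local East-North-Up coordinates.
    lat, alt, az = np.radians([lat_deg, alt_deg, az_deg])
    boresight = np.array([np.sin(az) * np.cos(alt),
                          np.cos(az) * np.cos(alt),
                          np.sin(alt)])
    # The sky appears to rotate about the celestial pole, which sits due north
    # at an elevation equal to the observer's latitude.
    pole = np.array([0.0, np.cos(lat), np.sin(lat)])
    # Component of the sky's rotation perpendicular to the boresight.
    angular_rate = SIDEREAL_RATE * np.linalg.norm(np.cross(pole, boresight))
    # Small-angle approximation: linear drift of a star image on the sensor
    # (mm/s), which is what the sensor-shift mechanism has to cancel.
    return focal_length_mm * angular_rate

# Pointing due south, 45 degrees up, from 47N with a 50mm lens:
rate = sensor_drift_rate(47, 45, 180, 50)
print(f"{rate * 1000:.1f} microns/s, i.e. ~{rate * 120:.2f} mm over a 2-minute exposure")
```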
Eh? The manufacturer datasheet states you can take at most a 300-second exposure with this rotation compensation system. Are you arguing that this limitation isn't a problem?
As a full frame DSLR shooter (Nikon D750 + 20mm F1.8), this is wild considering those crappy tiny sensors on phones. To get similar (albeit much, much sharper) results, I have to lug around 2kg of camera and lens plus bulky tripod.
Even with this, to get those dark dust clouds and stark colors some heavy post-processing is required (which I mostly don't do because I consider it too much of an alteration of the original image, even though it creates a more interesting image). Don't think for a second that those superb images you can see everywhere are not literally over-painted in Photoshop (look at online tutorials on how to do it if you don't believe me).
I guess to make things impressive, the Google guys went to some properly remote desert far from any artificial light. And unless I missed something, they still used some kind of tripod. In the European Alps, this kind of result is practically impossible - there is always some tiny village in every valley, and even if not, light pollution seeps in from far away. One night panorama I have shows quite a strong glow coming from the village of Chamonix some 15 km away, which is on the other side of the massive Mont Blanc range [1]. Anything can be achieved if you start playing a lot with Photoshop brushes, layers etc., but for me it's one step too far.
Imagine what results can be had when such algorithms are paired to a full frame (or bigger) sensor!
> To get similar (albeit much, much sharper) results
Tbh it's not that hard to get sharper results; this was shot on a 40-year-old camera, with a 50+ year-old lens, using a cheapo carbon fiber tripod (in 25+ mph wind that probably rocked my camera quite a bit).
> And unless I missed something, they still used some tripod.
They did: "Clearly, this cannot work with a handheld camera; the phone would have to be placed on a tripod, a rock, or whatever else might be available to hold the camera steady."
Zoomed out, those photos aren’t terrible, but if you click to view a higher resolution version, you can see the softness created by the noise removal algorithm. Based on my experience, the image quality is comparable to DSLRs of about ten years ago. Noise removal is a bandaid. In astrophotography, it will virtually always look softer than reality actually is, and it will also have removed actual stellar objects. There is no substitute for a better, lower noise sensor, but you work with what you have.
Sensor size is inversely correlated with the noise level. Smaller sensors have more noise, with the least noise found on full-frame (i.e., 35mm) or larger sensors. Phones have small sensors.
This is not just simple noise removal but combining multiple photos with the noise appearing in different places (aside from the hot pixels). So it's a strategy to get much of the same effect as a really long exposure, with much less noise than you'd expect from the tiny sensor, because the noise averages out over multiple shots. In principle it's of course possible to do this with a DSLR as well; e.g. Hugin might be able to do this.
Of course regular noise cancellation and other very lossy processing still kick in after that (which may explain the blurry result). It would be interesting to look at the raw image produced by this.
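A quick simulation of why that averaging works so well (the noise numbers are made up, purely to show the scaling): the random noise in a stack of N frames drops by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 100.0
frame_noise = 20.0   # made-up per-frame noise level

for n_frames in (1, 4, 16, 64):
    frames = true_signal + frame_noise * rng.standard_normal((n_frames, 100_000))
    stacked = frames.mean(axis=0)               # average the aligned frames
    print(f"{n_frames:3d} frames: residual noise ~ {stacked.std():.1f} "
          f"(expected {frame_noise / np.sqrt(n_frames):.1f})")
```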
I use Open Camera on my cheap Nokia 7 Plus (which uses two cameras) and have been getting OK-ish results in Darktable. The DNG file you get combines information from both sensors. One of them is black and white, so these look really flat until you fix it in post-processing. The raw photos have lots of noise (as you would expect) but noise filtering is pretty effective.
I imagine for this it would produce a DNG with information from the different stills combined but none of the other post-processing (except maybe hot pixel removal).
That's exactly what is amazing about this tech - 10 years ago you'd have to carry a backpack; now the same quality of picture can be taken with a multi-purpose device which fits in your pocket. Nothing to complain about if you ask me!
I still get better photos with my almost 10 year old camera though.
UPD: oops, XT-1 is just 5 years old, actually. I guess I have to take my words back, sort of.
Still, the point is the same - as long as you're taking a shot for your Instagram account, your smartphone is alright. For anything bigger you still need a camera with a decent lens.
Meh, I used to take better photos with a disposable pocket camera where you take off the shutter to allow for long exposure. But it's a step in the right direction. Ultimately you would indeed need to filter out a lot of thermal noise to allow for longer exposures on a CCD. (I started digital astrophotography years ago with an ST6 that needed to be cooled down to -60C using a Peltier module.)
Not exactly: noise level is inversely correlated with the size of the photosensitive elements. The smaller they are, the more noise you have.
For example, an 8MP APS-C sensor should have less noise than a 40MP full-frame sensor.
Per-pixel noise does indeed depend on the pixel size, but image-scale noise is almost completely independent of it.
That's because while a large pixel has less noise, it appears larger in the output and so the noise is correspondingly more visible. A small pixel is noisier, but smaller, so the net noise is the same for the same sensor size.
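A minimal shot-noise simulation of that argument (idealised: Poisson photon noise only, no read noise, numbers made up): per pixel the big photosite wins, but compared at the same output scale the two come out the same.

```python
import numpy as np

rng = np.random.default_rng(0)
photon_flux = 1000                 # photons per unit area during the exposure (made-up)
trials = 100_000

# One "big" pixel covers the same sensor area as four "small" pixels.
big   = rng.poisson(photon_flux * 4.0, size=trials)
small = rng.poisson(photon_flux * 1.0, size=(trials, 4))

snr_big_pixel    = big.mean() / big.std()
snr_small_pixel  = small.mean() / small.std()        # per-pixel: worse
binned           = small.sum(axis=1)                 # same output area as one big pixel
snr_small_binned = binned.mean() / binned.std()      # ... comes out the same as the big pixel

print(f"SNR per big pixel:              {snr_big_pixel:.1f}")
print(f"SNR per small pixel:            {snr_small_pixel:.1f}")
print(f"SNR of 4 small pixels combined: {snr_small_binned:.1f}")
```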
This, and sensor technology: e.g. older MFT-sized 12MP sensors perform worse than more recent 16MP MFT sensors, which in turn perform worse than even more recent 20MP MFT sensors.
This is super cool tech. But I can't help thinking that this shows that the Pixel camera lead must be someone with an engineering background who got nerd sniped by this problem because it's cool and hard to do. Someone with a product background would've focused on something less niche.
On the other hand, some people who like Google like it because it still sometimes works on geeky, cool, fun stuff instead of being super product focused.
Night sight for normal night photography, absolutely. Taking pictures in low-light situations is common. Astrophotography on the other hand is much less common.
I can't read about commodity hardware stargazing without my mind drifting to 2bit Astrophotography¹ which used the Gameboy Camera².
If you compare a paper star chart from the Gameboy era to Stellarium³ and a Pixel, then it can only lead you to wonder where we will be in another 20 years.
I'd prefer to see Google focus on just making the Pixel line better everyday phones than working on exotic stuff like this or the motion sensor they added in the Pixel 4. Right now there's not much reason to choose a Pixel unless stock Android is your top priority.
What needs improvement? I have a Pixel 2 and it's a great everyday phone. The only thing I can think of is a bigger battery but that's a problem with every phone.
A few years ago, it seemed like the manufacturers were going haywire with terrible, clunky launchers that slowed down already slow hardware. A clean, stock Android seemed like the only option for reasonable performance, and there was a promise of timely updates.
Well, now hardware is fine and the "updates" are starting to feel less and less valuable. They no longer bring faster, cleaner interfaces. They just bring some new widgets and gizmos.
Now, I think the Pixel line is... OK. My wife and I have a Pixel 2 XL (used, eBay) and a Pixel 3 (spring sale for $400). And the 3a line is close to "everyday" pricing, especially when it goes on sale. But I'm starting to question whether it's worth sticking to Google's stock phones or if it's time to start cross-shopping competitors once again. But for so many of us, being able to snap photos and have them look pretty good is a nice comfort after the ugly early years of phones with cameras that required a lot of patience and persistence to use.
So in my opinion, there's value in putting resources into ensuring good photography, even if that's not the priority for every phone buyer. What are the best alternative phones with "good enough" cameras and "everyday" pricing?
I don't think Google has it in them. Something about their vertical integration, supply chain management, cross team coordination, etc etc. They just consistently put out phones that are subpar in the daily usability and quality control fronts. So I'm kinda glad they go for these exotic features. They know they have to stand out somehow.
Having Chrome on my phone, or the choice of any alternative web browser, is a big enough reason for me to pick Android. Besides, my experience with Android in the last couple of years has been more solid and consistent than iOS, even if the UI and animations are less polished.
I agree and I prefer the UX on Android in a lot of respects too. I just don’t think the Pixels have been a good value compared to other Android phones lately. They either need to be cheaper or better than they are now.
The photos are gorgeous and I would definitely use this mode.
Astrophotography always kind of rubs me the wrong way though, because that's not how it looks. Even if you go out to somewhere that's really dark, like a large national park, and wait for a clear night, it's never going to look to your eye like it does on Instagram. Don't get me wrong, what you do see is absolutely magnificent, it's just not what's in those pictures.
Seeing the galaxy with your own eyes is one of the most majestic things you'll ever witness. It's something that has inspired spontaneous prayer throughout history. It doesn't really need a filter.
IMHO they've basically missed an opportunity here for many years. They'd be in a perfect position to offer those things, combined with a much better/larger sensor, which enables even better images. On smartphones it's a matter of necessity, as the sensor is (fairly) crappy in comparison, but on a DSLR it could still be a benefit. Personally I'd be perfectly happy to get a pre-processed DNG from the camera instead of having to do this afterwards. And then give me the raw files to do it manually as well.
Perhaps they're trying not to cannibalize their lower market segments or think that professionals would never use those things (on which they might be correct). But I can definitely see that computational photography beyond raw->JPEG conversion with a color profile could have its place in a DSLR.
While it would be nice to load some post-processing script onto the camera and get all that done by the camera without having to bother with it yourself, all that processing will cost battery life, as well as response time. Having to charge your DSLR every day is a huge cost in convenience, and making a dedicated imaging machine unresponsive would be terrible.
Additionally, you are always going to move your pictures out of the camera for proper viewing, so it doesn't make a lot of sense to provide the best possible picture already in the camera.
I don't know if the software they bundle with the camera is any good at this computational photography, though (I've never checked to see if there are Linux versions), but it had better be if it indeed helps image quality.
Yeah, it's definitely nice to get ~1000 images per battery charge. But when uploading to the PC infrequently you'll then have to sort photos into groups to process individually (something I already hate with panoramas). Personally it's something I'd rather not do, at the expense of only getting ~200 photos per charge. It's also not necessary to compromise, as those special modes would be, well, special modes. So to preserve battery I could just as well shoot raw as normal.
Idk, I feel like we have far better software on our computers, and mirrorless/DSLRs will always beat our phones in terms of raw specs.
This is more of a "Now anyone can capture decent astrophotography with just their phone!" thing than some revolutionary new technique, if that makes sense. Basically they have to push their phone's sensor as far as it'll go and use software to remove the noise, instead of using a better lens/sensor setup (which is space- and cost-prohibitive for a phone).
You're still welcome to do extensive post-processing on your computer (I don't want my camera doing any processing), and indeed that's what any astro-photographer will do if they wanted.
That said I've captured some amazing night/moon photos with my A6000 that I have never been able to achieve before, and that's without any extra processing.
> why aren't DSLR/mirrorless manufactures doing it too?
Most of them are historical companies and are probably very old-school / slow to adapt. Google probably has access to better software engineers than Nikon or Canon, which seem barely able to develop a working Bluetooth/Wi-Fi sync.
They think they're still hardware companies in a hardware-first world. The problem is we've started hitting some limits in the hardware which software can improve on dramatically, but they are terrible at software (for the most part).
The photos taken in this mode are amazing. But I think they are improved a lot if a vignetting correction is applied. Without that correction, the sky brightness is much greater near the center of the field than near the edges.
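For anyone who wants to try it, here is a crude sketch of such a correction. It assumes a simple radial falloff model and an 8-bit image; a proper correction would be calibrated from a flat-field frame or the lens profile.

```python
import numpy as np

def correct_vignetting(image, strength=0.35):
    # Assume brightness falls off roughly with the square of the distance from
    # the image centre and boost the edges accordingly. 'strength' is a knob
    # you would tune by eye or from a flat-field calibration frame.
    h, w = image.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)  # 0 at centre, 1 at corners
    gain = 1.0 + strength * r ** 2
    if image.ndim == 3:                    # colour image: same gain for every channel
        gain = gain[..., None]
    corrected = image.astype(np.float64) * gain
    return np.clip(corrected, 0, 255).astype(np.uint8)   # assumes 8-bit input
```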
What are the chances that this AI camera app is not faking the image?
Here is why I ask:
- It has a gyro, so it knows whether we are pointing at the sky or not.
- AI checks whether it's a clear sky. If yes, post a fake image. If not, don't risk getting caught.
- Time + geo position gives the angle and position of the camera relative to the space above us.
There is no chance this is being faked. I've been playing with the astrophotography mode on a Pixel 4 XL in a remote country location with no cell phone connectivity. A single 4-minute exposure is able to record stars down to about magnitude 9.5. A single exposure can detect the Crab Nebula, or the two brightest satellite galaxies of the Andromeda Galaxy (M 32 and NGC 205). To fake such results without internet connectivity would mean that the camera software would have to have an internal catalog of about 250,000 stars, with accurate colors and coordinates. It would need the size and shape of nebulae, the contours and brightness of the Milky Way, etc. When I add multiple frames, I see fainter stars appear, as they should, so the internal catalog would really need millions of stars, most of which would not show up unless the user carefully aligned and summed multiple individual images. This is not being faked.
You do realize that with 2 floats to store the position of a star on a sphere, that's like 8 MB per million stars?
With compression you can almost certainly get away with something like 2 bytes per star.
And another solution is to simply store a bitmap of the sphere around Earth.
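Quick sanity check on those numbers (the bitmap resolution and byte budgets here are my own assumptions, just to verify the arithmetic):

```python
import math

n_stars = 1_000_000

two_floats = n_stars * 2 * 4   # two 32-bit floats (RA, Dec) per star
two_bytes  = n_stars * 2       # ~2 bytes per star after heavy quantization

# Alternative: a whole-sky brightness bitmap at 1 arc-minute resolution, 1 byte per cell.
arcmin_per_sterad = (180 * 60 / math.pi) ** 2
bitmap = int(4 * math.pi * arcmin_per_sterad)

print(f"2 floats per star:     {two_floats / 1e6:.0f} MB")   # ~8 MB
print(f"~2 bytes per star:     {two_bytes / 1e6:.0f} MB")    # ~2 MB
print(f"1'-resolution bitmap:  {bitmap / 1e6:.0f} MB")       # ~150 MB
```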
It also shows the positions of the planets correctly, so it would need to have an ephemeris, or planetary orbital elements. Meteors and satellites show trails across the star field. At some point simulating all of this would be more complicated than really doing it.
"Faking" isn't thinking about it right. There's no point in going through all the trouble of manually constructing a fake sky photo like that when you can just aggressively train a machine learning model to produce good-looking skies in photographs, which they appear to have done for the purposes of things like adjusting the brightness of the sky in night-time photos. In the end, it's not a question of whether it's "fake", just how much of the resulting photo is the invention of a neural net instead of the result of light hitting the sensor.
In the end photography like this is art, though, so if the person taking the shot is happy with it, then it's fine, probably. Just don't enter it in a competition with rules against retouching...
This reminds me of the argument over the moon landing: effectively it was easier to go there than to try to fake the live coverage, because the video technology to fake it wasn't available.
I think the point of GP is that astrophotography as demonstrated here is mostly implementable in software. If Google wanted, they could very well produce an app for iPhone that does all of the above.
It’s hardly a secret that innovation in smartphone cameras is mainly in software now. This software camera innovation is one of, or the, main area that phone manufacturers are competing on at the top end of the market, so characterising it as ‘easy’ seems strange.
It seems they have some ML stuff in there for specific features, e.g. sky / land light balance and hot pixel removal (probably similar to how denoising for MC path tracing works).
Aside: randomly recognised Ryan Geiss in the credits, he did the Milkdrop plugin for Winamp back in the day, and also some cool tech demos for Nvidia...
There's no machine learning involved in hot pixel removal. The way I have heard it described is that hot pixels stay in the same place across multiple images. One of the processing steps is to figure out the location of the hot pixels by looking at multiple frames. Stars will shift slightly between the longer exposures, whereas hot pixels will not.
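That idea is easy to sketch (my own toy version, not Google's code): flag pixels that are suspiciously bright in nearly every un-aligned frame, since drifting stars won't be bright at the same spot every time.

```python
import numpy as np

def find_hot_pixels(frames, sigma=5.0, fraction=0.9):
    # frames: array of shape (n_frames, height, width), taken *before* alignment,
    # so real stars drift between exposures but hot pixels stay put.
    frames = np.asarray(frames, dtype=np.float64)
    med = np.median(frames, axis=(1, 2), keepdims=True)
    std = frames.std(axis=(1, 2), keepdims=True)
    bright = frames > med + sigma * std          # per-frame "suspiciously bright" mask
    # A star is bright at a given (x, y) in only a few frames; a hot pixel is
    # bright at the same (x, y) in nearly all of them.
    return bright.mean(axis=0) >= fraction       # boolean (height, width) mask

def repair_hot_pixels(frame, hot):
    # Replace each flagged pixel with the median of its 3x3 neighbourhood.
    fixed = frame.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(hot)):
        patch = frame[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        fixed[y, x] = np.median(patch)
    return fixed
```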
In that example the Pixel has a 6x longer exposure. While Pixels can currently get better night photos, it's simply because the software lets them take minutes-long exposures instead of seconds-long ones. From a hardware point of view I don't think there's an advantage.
Both the Pixel and iPhone have AMAZING camera hardware and software that is better than any others in their domain. It’s just that the two are roughly equal except for the exposure time used.
Yeah, but I think the software difference required for 30 seconds vs 3 minutes is minimal. 30 seconds is still long enough that you have to deal with all the issues that would come up with a 3-minute exposure.