Update to the “Samsung space zoom moon shots are fake” (reddit.com)
234 points by ibreakphotos 10 days ago | hide | past | favorite | 122 comments

It will be quite weird to have this brief blip in the anthropological record of like 1980-2010 where cameras were cheap enough that lots of real images were captured, and then 2010-2019 where everyone (in rich western countries at least) had cameras all the time and we got tons of real images of everyday life (filters are popular of course but using them is an intentional choice).

Then everyone goes inside for like 2 years (pandemic), there isn’t much to take pictures of, and then by the time we start going back outside everything is actually paintings by AI artists.

When the AI generated content outnumbers human content 10,000 to 1, and the quality is almost ridiculously varied and sometimes incredible, and no functional system can sustainably authenticate humans without being overrun by bots…

Then the record from that point forward and forever hence will be one of AI, not one of humanity.

Arguably this is already true of text. We have already entered the age where it is impossible to determine with certainty whether a blog article was written by a human, and so every text data set that includes 2023 internet data will include ChatGPT outputs as ground truth.

Future historians will likely place a great deal of emphasis on 2019 being the year in which we reached a forever-maximum of non-pandemic-tinged provably-human-created data. It's no reason to panic, and the mixed data sets will absolutely be representative of humanity's accomplishments as a whole... but there will be notable differences.

> We have already entered the age where it is impossible to determine with certainty whether a blog article was written by a human, and so every text data set that includes 2023 internet data will include ChatGPT outputs as ground truth.

Is that any worse than unreliable humans? Whether they're intentionally lying or just mistaken, internet posts have warranted a grain of salt for decades now.

For historians it's a catastrophe. After 2023, the good old Roman source-poisoning assumption is back in full effect. He who is in power can actually rewrite history. Assume everyone uses written documentation just to prove loyalty to the current computational power holder.

Truth will become something similar to the history of the Roman emperors: a million comments and blog articles condemning past rulers and CEOs as wrong all along, while praising the current one, with similar spam in the forgotten archives.

A species ruled by madmen, each worse than the last. Praise be upon the Bezos, who undid the mad world that reigned before.

I think they mean ground truth in terms of what human created content looks like, not ground truth in terms of what is actually going on in the world, which of course we don’t collect completely accurately.

I wonder what will be the first year when most of the meaningful comments on hacker news will be AI generated.

I dunno, seems hard to say. Upvote systems favor funny quips and matching the popular sentiment so I guess we won’t get a good measurement of meaningfulness.

The unreliability of humans is still valuable data because it shows what humans were thinking at the time.

I guess, but that's a really noisy signal once you account for deliberately misleading posts.

Even deliberately misleading posts give you insight into what people are thinking at the time, though.

> Is that any worse than unreliable humans?

Yes, I think so. Maybe not (but maybe) in terms of factual information, but in terms of opinion, analysis, arts, etc. You know, human things. The value of those things is, in my opinion, tightly coupled with their being produced by humans.

I fear the social effects. The days of making long distance friends or relationships over the internet are over. Anonymous communities that don't implement strong KYC are over.

What is KYC?

Know Your Customer

It may be a great thing. We have in-person events vs. completely fabricated online wonderland. It might - I hope - mean that people flock back to IRL activities.

OTOH, I have several very deep, meaningful and valuable friendships, spanning decades, with people I've never been physically near. Losing the ability to develop those sorts of relationships is a very real loss.

And I have never stopped seeing my IRL friends in person. Those, I almost never talk with electronically.

Which isn't great for those of us with very little ability to attend activities outside of the home.

It's not that hard to filter content by source. Reputable news sources and authors will not publish AI-generated content (without grave risk to their reputation). And most of the abundant background noise has never provided much value before either.

> Reputable news sources and authors will not publish AI-generated content...

These news sources are not just one blob or a black box. They're different people working for money. And we know of reputable sources that have published fictional stories as real for years (see Der Spiegel in Germany). Now it's really easy to use AI unnoticed, so why exactly would this not happen?

cnet published a bunch of unlabeled ai-generated articles for months https://www.theverge.com/2023/1/20/23564311/cnet-pausing-ai-...

the nyt just published a long interview with sydney in which she attempted to convince the author his wife didn't love him, which is ai-generated content; while that was clearly labeled, it will still appear in text data sets as ground truth

abundant background noise provides a lot of value for text data sets because it tells you how people talk

In all fairness, cnet hasn't been a reputable news source for a very long time.

I’ve yet to have ChatGPT produce usable content for my technical writing, or even lighter portions of proposal writing, rec letters, emails, etc. I've found it does well at cleaning up text, fun language games, and quickly sourcing readily available information. I don’t think 2023 is the year books are replaced (maybe low-novelty blogs). I feel a disconnect between how GPT is being exalted and my underwhelming experience getting usable content from it.

Or…art and literature enter a new two-tiered era with artistic and literary works made by humans, with other humans as witnesses to some or all of the process, becoming extreme-premium-items alongside AI-generated works, which are likely to become the fare of mainstream society, if the current wave continues unabated.

I suppose the veracity of history could be an excellent blockchain application.

> everything is actually paintings by AI artists

It just occurred to me that we're actually going through the opposite transition outlined in Pratchett's discworld. Oh the irony.

> Down in the cellar Otto Chriek picked up the dark light iconograph and looked at it again. Then he scratched it with a long pale finger, as if trying to remove something. 'Strange . . .' he said. The imp hadn't imagined it, he knew. Imps had no imagination whatsoever. They didn't know how to lie.

From Terry Pratchett's "The Truth"

I have the same thought about all artistic content. We've had a period of hundreds of years where we were advanced enough to reliably record, preserve and duplicate creative works but not so much that they were influenced by camera tech, photoshop, generative AI, LLMs and all other digital craziness. This reliance will only continue to increase as the tech gets better. After a point will it be too difficult to ever produce anything unique? Is the golden age of human creativity behind us?

Reading further into that thread, someone provides the OP with a blurry image of the moon but with craters in the wrong places: the "AI enhanced" photo is indeed much sharper and detailed than the original but it still retains the craters and land features in the wrong place. This suggests that the processing isn't simply replacing the moon with real, high-quality moon photos but rather that it knows how the moon, or a moon-like object, should look and creates detail on the original image.

This looks a lot like what DLSS does in GPUs.

I'm torn on this: it's fake but it's a good way of getting around optics limitations in certain scenarios. Is fake detail on a photo worse than no detail for non-scientific, consumer devices?

Samsung's own spokesperson said this is an explicit feature of the phone.

> But I also want to include one caveat: the S21 Ultra’s Scene Optimizer will not suddenly make all 100x zoom photos look as crispy as the Moon. Samsung flat-out says the Scene Optimizer can recognize “more than 30 scenes.” That includes the following according to a spokesperson:

> Food, Portraits, Flowers, Indoor scenes, Animals, Landscapes, Greenery, Trees, Sky, Mountains, Beaches, Sunrises and sunsets, Watersides, Street scenes, Night scenes, Waterfalls, Snow, Birds, Backlit, Text, Clothing, Vehicle, Shoe, Dog, Face, Drink, Stage, Baby, People, Cat, Moon.

If it was just the normal camera 'faking' photos it'd be bad, but they literally offer a list of stuff that their zoom tab will AI-optimize. You can still use the regular camera to turn this feature off.

To me this sounds like a cool but simply misunderstood feature.

Reminds me of the "Camera Restricta" [1].

I encourage everyone interested in tech and photography to check out Villem Flusser's Towards A Philosophy of Photography [2] and of course Walter Benjamin's The Work of Art in the Age of Mechanical Reproduction [3].

[1] https://hackaday.com/2017/01/28/camera-restricta-ensures-ori... [2] (1984) PDF http://imagineallthepeople.info/Flusser_TowardsAPhilosophyof... [3] (1932) PDF https://web.mit.edu/allanmc/www/benjamin.pdf


will be interesting to see how that works with less common animals.

I can see it getting confused with, for example, a capybara, thinking it's a dog, and filling in the details making a picture of a weird hybrid dogybara.

I offer you a capydoga.

This is one of those rare times Google actually returns 0 results. 0. Well played.

not for long

a decade from now kids will all be getting named with not-yet-on-google spellings of their names, to help distinguish them from AI bot personas of the same name. idiocracy calls it once again (Upgrayedd)

> Is fake detail on a photo worse than no detail for non-scientific, consumer devices?

If the people seeing it don't know that it's fake detail, then yes, it's actively worse than no detail.

didn't seem like they tried it with a blurry picture of the fake moon though. it doesn't have to be either/or, it can be both.

You could let the user pick

You can by turning off scene optimizer.

I'm amazed at this New York Times article from 1984 describing a "not-too-distant future" that is actually happening now:


> In the not-too-distant future, realistic-looking images will probably have to be labeled, like words, as either fiction or nonfiction, because it may be impossible to tell them apart. We may have to rely on the image maker, and not the image, to tell us into which category certain pictures fall.

Even before AI that was true. France has a law that mandates every photoshopped picture of people used in advertising to be labeled as doctored. They just put it in tiny text on the side, but every single French ad with a person on it does say "Photo Retouchée" (because of course they all are)

I wonder if that applies to (automatic) in-camera filters/effects nowadays.

Simplified summary for those scared off by a link to Reddit:

He made an image of an intentionally blurry moon, and then used the camera to take a picture of it while it was displayed on the monitor from across the room. The resulting picture was of a sharply detailed moon!

This is an update to that post, with a new test image that places the moon and a half image of the moon together in one frame.

The full moon is enhanced, while the half image remains blurry.

Which shows you it's not "AI"; it's more like some hacky code running right inside the phone which detects a plausible moon-like object in the image and fills it with pixels from a canned image.

This is a lot like the ages-old trick of a compiler being rigged to recognize a particular source file (a benchmark program) and pull out a hand-optimized assembly program.

> Which shows you it's not "AI"; it's more like some hacky code running right inside the phone which detects a plausible moon-like object in the image

The hacky code running right inside phones, detecting objects, identifying faces and whatnot, is often called AI.

Fucking single if statements are called AI these days. That's not a strong argument. :)

My Samsung tumble dryer proudly tells me every time I turn it on, “optimised by AI”. But it’s not smart enough to know it’s told me that a hundred times before.

"Optimized by AI" is the 2020's version of "Fuzzy Logic"

To be fair, "optimized by AI" doesn't claim that the machine contains AI. Might just be "at some stage in the design process, AI was involved in some way". Which is probably true of lots of hardware today.

That's because AI is an awful expression :|

Should have always been called "fancy algorithms" from the get-go, calling it "intelligence" opens the interpretation up for a world of misunderstandings

Do neural networks fall into the algorithm category?

How do you draw the line between an engineered process and a discovered relationship? Because neural networks are much closer to discovered than engineered.

Yes. Being engineered or not is completely orthogonal to the discussion

>Because the neural networks are much closer to discovered than engineered

So is most code. Programmers in general can't engineer for shit, (half blindly) guided trial and error is the most common development paradigm

Check a comment in the Reddit comment thread. The author of the post repeats the experiment with a blurred and modified picture of the moon that has craters in wrong places. The phone produced a photo of the moon that is less blurry, but also has craters in wrong places. This suggests it's not just using a canned image.

> This is a lot like the ages-old trick of a compiler being rigged to recognize particular source file (benchmark program) and pulls out a hand-optimized assembly program.

I never knew compilers did this, but I remember it being the case for some games (There was a debacle once upon a time where renaming 'Quake3.exe' to 'Quack3.exe' would reveal such shenanigans on some drivers.)

A less-evil example is that most compilers can recognize when you wrote a memcpy() and replace it with an actual memcpy() call, which is usually very carefully optimized for your specific processor.

This can cause fun issues like your version of memcpy() (e.g. embedded libc) being replaced by the compiler with a call to itself: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56888
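As a minimal C sketch (the function name is illustrative), this is the kind of byte-copy loop that optimizing compilers will typically recognize and rewrite:

```c
#include <stddef.h>

/* A naive byte-copy loop. At -O2, GCC and Clang typically recognize
   this idiom and emit a call to the library memcpy() instead of the
   loop itself. This is also why a libc that implements memcpy() this
   way must be built with -fno-builtin (or similar), or the compiler
   can turn the implementation into a call to itself. */
void copy_bytes(char *dst, const char *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}
```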

The ability to figure out what you're doing and then just do that well, rather than attempting to optimise the code you wrote is called Idiom Recognition and has been around for a very long time, like, 1970s or maybe before.

A relatively modern example was people writing code to count set bits in a value, because older languages don't provide an intrinsic to do that and it's often useful, these days your CPU probably has a CPU instruction specifically for that, it's called "popcount" (sometimes spelled popcnt) and so it's valuable to recognise what's going on in the say six lines of C++ in your program and emit a single POPCOUNT instruction instead. If your fancy method to count bits isn't recognised your program goes slower.
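The bit-counting case can be sketched in C (function name illustrative); when targeting a CPU that has the instruction, compilers can collapse a loop like this into a single POPCNT:

```c
#include <stdint.h>

/* Kernighan's bit-count: each iteration clears the lowest set bit,
   so the loop runs once per set bit. Modern compilers recognize
   idioms like this and, when the target supports it (e.g. -mpopcnt
   on x86), emit a single population-count instruction instead. */
unsigned popcount_naive(uint32_t x) {
    unsigned count = 0;
    while (x) {
        x &= x - 1;  /* clear the lowest set bit */
        count++;
    }
    return count;
}
```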

The memcpy case is special because C is weird in actually trying to define this as a function, copying memory is so commonly necessary that compilers need to have a pre-baked implementation for the rest of what they do anyway, it's not a surprise that idiom recognition results in your custom memcpy being turned into the built-in memcpy, although in the case you linked that's indeed a bug.

That's due to the run-time and compiler being different projects. In Lisps it's not unusual to see (defun car (x) (car x)).

Arguably, memcpy belongs in libgcc and not the C library. Glibc should not have to do anything in regard to memcpy, other than include the right compiler header in its stdlib.h.

This is still the case with games, pretty much every new big release comes with an associated 'game ready driver' release where the driver can detect and adjust some behaviors to better suit that game.

But that provides a real benefit whereas the compiler trick's objective is to look good in a benchmark.

> Which shows you it's not "AI"; it's more like some hacky code running right inside the phone which detects a plausible moon-like object in the image and fills it with pixels from a canned image.

This is actually a common mode of breakage of standard vision models.

Read the thread. Someone did it with photoshopped craters and it enhanced those as well.

It's like saying face recognition didn't recognize it when you photoshopped an ear into the photo.

Why would people be scared of a Reddit link?

Same reason I'm scared of TV Tropes [1] links.

[1] https://tvtropes.org/pmwiki/pmwiki.php/Main/DownTheRabbitHol...

Would it not be even easier to prove by simply switching camera app to a non-branded one?

You don't know where in the stack the optimization is taking place.

Next gen: you go to a music concert and take some crappy photos (low light, far from the stage). The AI recognizes the artist, downloads the artist's most recent high-res stock pictures from GettyImages, has a general model of how a scene is set up (lights, lasers), and re-synthesizes your photo into something with the production quality of a movie's concert scene.

In a video, it also resamples the clipped audio with the actual song run through a "live concert" audio filter, and adds some nice crowd noises too.

Next-next-gen: using deep fake technology artist voices a song dedication just for you.

I wrote an app 15 years ago that would detect where you’d been travelling and get the best photos from Flickr based on the must-see locations you passed. No need to take photos of the Eiffel Tower anymore.

Next-next-next-gen: BigTech corps offers a machine-brain interface so that you can "enhance" your own senses with the help of external processing power. Your reality is about to change, but not without a ton of ads which you cannot escape.

You get a neural interface that lets you learn and process information 100x faster but there's always an ad in the corner of your vision. Do you accept?

Yet psychedelics and weed, which also enhance your experience, will continue to be federally illegal.

All angles of a concert or parade are captured and instantly synthesized into a time shiftable fully 3D world. Why limit the output to 2D video?

If crappy phone photos are enough to make a decent depth map, I think Stablediffusion can do this today. The only missing piece would be for the band to train a model on high quality photos from specifically their concerts.

ControlNet is really neat

From a technological perspective those are neat. But AI (or any computer program) hallucinating entire experiences that never existed is kind of disturbing even if said experience is actually nice.

Imagine that but with a video...record a concert with poor audio quality of the band and the phone injects the high fidelity song...

The original discussion unfolded here 2 days ago:

"Samsung “space zoom” moon shots are fake, and here is the proof"

https://news.ycombinator.com/item?id=35107601 (375 comments)

And to potentially save you a click, here's the original post:


and here's a post from 2021 doing the same tests with the Galaxy S21 Ultra


Yeah, I can just about get a decent crop of the moon on my Nikon Z50 with the 250mm lens on it that looks reasonable, if it’s on a tripod. There was no way there wasn't some other trickery going on here.

What is worrying is: why would you buy something with a camera that lies at the source? When you need it to tell the truth, how do you know it will?

> Yeah I can just about get a decent crop of the moon on my Nikon Z50 with the 250mm lens on it that looks reasonable, if it’s on a tripod.

Yeah 300+ is more comfortable for moonshots on APSC sensors, unless you are in -really- good conditions (i.e. perfect sky, far away from artificial lighting.) With 400mm I was able to take a 'pretty dang good' moonshot just outside of my house, even with a little bit of haze that day. It took a good amount of tweaking in DXO to look -great-, but it was worth it.

> What is worrying is why would you buy something with a camera that lies at source. When you need it to tell the truth, how do you know it will be telling it?

This is one of the reasons I still don't care to get rid of my DSLRs, and I shoot raw [0]. (Besides, PureView was the last phone camera setup I actually enjoyed.)

At the very -least-, these should be settings with very obvious controls, off by default, and ideally 'opted in.' Otherwise it opens up the possibility that what is happening could count as cheating at certain photo benchmarks.

[0] Okay I also shoot raw to laugh at how awkward some of Sony's E-mount lenses are without lens profile correction applied. Looking at you SEL16F28.

Edit: Context of what you can do at ~550mm equiv. F8, ISO 250, 1/320: https://imgur.com/a/ryoqQB9

I don't know if DSLRs have changed much, but don't they all require post-processing in order to get any kind of sharp picture? Out of the camera they always seemed to be fuzzy as crap until you ran a sharpening filter.

That is, any time you are shooting digital, you don't ever look at what the camera saw, but at what some computer interpreted it to be.

I'd be interested in knowing what that picture of the moon looked like right out of the raw capture.

Unless I misunderstood you, this is a common misconception — with digital cameras there is no "real" or "raw" picture. What the camera captures is just the output of an analog-to-digital converter. How that resembles a picture at all is always a matter of subjective interpretation, there is no ground truth.

RAW files are as close to what the sensor saw as you can actually get. As opposed to the camera-created JPGs, which can look great but have much less data to work with. Or the JPGs created in-camera using in-camera processing (saturation, contrast, sharpness).

I shoot both, RAW and JPG fine. The latter for quick sharing, with in-camera processing, except for action shots at 9 fps: JPG fine never runs into buffer limits, while RAW or JPGs using processing do. And for birds in flight or sports or similar I don't care that much about RAW anyway.

Thank you for calling them raw files and not images.

Too many people think of raw files as analogous to slides shot in film cameras. In reality they are digital representations of the analog voltages made by sensors. In order to “see” a raw file it has to be processed which usually means demosaicing and assigning color to values based on what the camera manufacturer has provided. Oh and of course different displays will reproduce colors and brightness levels differently as well.

In reality there isn’t any such thing as an unmediated digital photograph. What is being done by smartphone cameras is a little further down the processing curve but is on the same continuum as “regular” digital photography, IMO. This second test shows me that Samsung is trying to give the photographer as sharp a picture as possible of what they took a picture of.

And analog photographers shouldn’t get but so smug about capturing the “actual” image. Even if you shoot transparency film or collodion wet plate the resultant image is still mediated by the materials. Color reproduction, contrast, and of course the translation from 3d world/object to 2d are still enormous manipulations. Taking a picture will always be transformative compared to the original subject.

So what is considered an “accurate” photograph will always be a philosophical question. IMO I wouldn’t call what Samsung has done “fake.” A lot of these arguments really sound like DSLR owners being upset that decent pictures can be made with pocket cameras.

I wouldn't call what Samsung does with the moon a photo anymore. A picture? Sure. For a photograph of the moon, though, it's way too close to what an AI generator would come up with when asked to create a picture of the moon. Samsung is somewhat using the sensor data as a prompt, it seems.

As far as accurate representation goes, digital photos are court-admissible as proof. And forensic ones go through a deliberate processing pipeline in which the original, as taken in camera, is never discarded. Which is close enough for me to be a representation of reality.

Personally, my opinion about those over-processed pictures you see on social media is the same as my opinion about Samsung faking the moon: I don't like either, regardless of whether they are smartphone or "real" camera photos.

No they don’t require that. I honestly, and I know this will annoy people, shoot jpeg and it’s sharp out of the camera.

Ah - that is useful to know. I'll admit my knowledge here is dated - the first DSLR I ever picked up, a Canon EOS20D + Sigma 70-300 F4-5.6 yielded incredibly disappointing results out of the camera - but (at least at the time) - it was widely understood that you had to run a sharpening filter.

Presumably the camera does this directly now?

The camera-processed JPEGs already have filters applied, most likely including a sharpening pass.

The fact is that most people are not looking for a phone camera to tell the 'truth', they're simply looking for good images similar to what they're used to seeing from pros with their much more fancy equipment (and remember that most of those photographs are also heavily edited, just by a human instead of AI). The only objective truth that matters much to the average person is stuff like the text or qr code remaining correct.

Most people do not care about their phone camera telling "the truth" because they are operating under the base assumption that cameras do tell the truth, and that their phone camera is honestly Very Good™ at capturing the true image that they're seeing with such a level of detail. That customers are not aware of the level to which professional images are processed is no justification for these features being sneaked in.

If these features were openly presented as some kind of augmented reality or editing it wouldn't be an issue, but as it stands it is no different from, for example, presenting an entertainment television network as if it were actual fact-based news, and justifying it by arguing that viewers are not looking for the truth but for something that makes them feel that they know the truth.

I think it's more difficult than that. People are intimately aware when things are not what they seem. The brain is really good at being suspicious about stuff. If you take a photo and it comes out better than you can see yourself, then questions will be asked. This discontinuity does not sit well.

As for the professionals, a lot of stuff is edited yes. But there is an art to not needing to do any editing worth mentioning. I think there's some intellectual honesty in not painting over any cracks.

A full moon is what, 1/250 sec of shutter speed? 250 mm on a DX sensor gets you to 375 mm, so yes, 1/250th is a tad slow to handhold. One of the reasons I like heavier camera bodies; I get a lot less movement blur with those.

Agree on the camera-faking-pictures bit, who wants that?

I shot this at 1/500 iso 400 f6.3 at equiv 375mm: https://imgur.com/ppj8U2Z - no edit other than crop.

I reckon I could have hand held it actually. The lens has pretty decent VR in it. Will wait until there's a full moon again and have another go.

And yeah I want my pictures to look natural.

That does look a bit blurry. Did you shoot with VR from a tripod?

I'm in the middle of London in the UK so that's pollution haze.

So bad? Shit, I never expected that... Over here, you need either fog or cloud cover for the moon to be so hazy...

Yeah, it can get quite bad. I'm right near the airport as well, which doesn't help.

I guess I'm lucky then; on clear nights the max I get is atmospheric haze. Other than that, from the living room window, the moon is crisp sharp. Despite living in a city of 300k-odd people...

To play Devil’s Abacus for a moment:

Why do you “need it to tell the truth”? I would bet a large number of the non-expert consumers (99% of users) do not care.

But to defeat my own argument: just make an “AI Enhance” option in the UI…

Do you ever take pictures of your rental car before and after you return it? Because now your rental company (and/or small claims court) will be able to credibly claim it could’ve been subtly modified by on-board AI and thus no longer admissible as evidence.

Besides stuff like this is like pouring gasoline onto conspiracy theory movements like Flat Earthers. Those may be just 1% of the US a population. But you know what isn’t just 1%? Young Earth Creationists, who believe in a similar conspiracy theory of scientists changing the geologic record and maybe astrophysical records as well. These types have large, maybe even disproportionate, political power as well.

You’ve won me over.

Loving the connotations of “Devil’s Abacus”.

Your last point is spot on.

It happens that Samsung is actually doing a brilliant job of faking it, even accounting for obstructions like tree branches etc.: https://twitter.com/David4252579/status/1634919880217731075

All this could have been OK if Samsung wasn't misleading people into believing that the picture is a result of computational photography and excellent optics.

I would love to have this in my phone's camera, but clearly identified as details added after the fact that might not represent the actual scene. It's done so well that practically no one noticed, and people like MKBHD praised it as a photography achievement.

The truth is, the non-AI implementations also fail to capture how we see the Moon, and fixing that with fake details is a welcome improvement as long as it's clear that it's not an actual capture.

According to a comment on Reddit this only happens when "Scene Optimizer" is active in the camera. I don't know if this is the default, it probably is nowadays, but my phone camera's equivalent is labeled AI.

I think the public needs to be educated that modern AI basically means to make up new data based on a statistical model from training data, i.e. fake.

In that comment, which refers to an article, Scene Optimizer is described as AI optimizing photography parameters like exposure; it doesn't mention making up details. If this is Samsung's position, then it's totally not cool.

Scott Hanselman posted today that this is no secret. Samsung have been talking openly about their "Moon Recognition Engine". It's on their website: In Korean, of course. But there's an automated translation of that too.



I'm not really "nail in the coffin" convinced.

Is there a process that does something akin to identifying the moon, and selecting a special filter for that? I'm pretty convinced of this. (Both by the evidence and also it's apparently advertised as such?)

What is that filter? I don't know, and the answer to this would inform "how fake" I think these images are.

It does seem to be a moon-specific treatment. It doesn't seem to be pasting a JPG of the moon and it doesn't seem to be generating the moon.

How fake it is, is, I think a measure of how dependent the result is on the specific sensor data, and how localized the changes are. If there's a moon-specific filter that's excellent at filling in data "between" sensed pixels without looking at any pixels far away (beyond, perhaps, the initial categorization as a photo of the moon where this filter would be good), that's not very fake to me.

If there was suddenly a new structure on the moon visible to the sensors, I'd want it reflected in the photo. I would consider the process very fake if it couldn't do that at all, and not very fake if it could do it reliably.

I’m just frustrated that there isn’t even basic thought on Samsung’s end. It could at the very least pull location data, check whether the moon is above the horizon for that location, check whether the phase of the moon matches what it sees, and THEN apply the filter.
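
The phase part of that sanity check is cheap to implement. As a rough illustration (this is my own sketch, not anything Samsung ships), the moon's age in its cycle can be approximated from a single reference new moon and the mean synodic month:

```python
from datetime import datetime, timezone

SYNODIC_MONTH = 29.530588853  # mean lunar cycle length in days
# Reference new moon commonly used in phase algorithms: 2000-01-06 18:14 UTC.
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

def moon_phase_age(when: datetime) -> float:
    """Approximate days since the last new moon (0 = new, ~14.77 = full)."""
    days = (when - REFERENCE_NEW_MOON).total_seconds() / 86400.0
    return days % SYNODIC_MONTH

def plausibly_full(when: datetime, tolerance_days: float = 2.0) -> bool:
    """Rough plausibility check: is the moon anywhere near full on this date?

    A camera that "sees" a bright full moon on a date where this returns
    False should be suspicious of what it's looking at.
    """
    return abs(moon_phase_age(when) - SYNODIC_MONTH / 2) < tolerance_days
```

This only covers the phase; checking that the moon is actually above the horizon additionally needs the observer's location and some spherical astronomy (or a proper ephemeris library).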

Do we know what it's doing, exactly? It could just be doing some AI enhancement during post-processing. In that case the moon is not literally hardcoded, but part of some opaque machine learning model.

I don't know why this saga is still ongoing. It is a known fact that Samsung enhances the quality of known objects in images with AI. They literally advertise this.

There are aftermarket photography apps. Has anybody tried taking their Samsung S23 and photographing the moon with one of those? I wonder if the "AI assistant" is limited to Samsung's own bundled camera app, or if it's always on and modifies pictures taken via other apps.

You can just turn off "scene optimizer" in the stock app.

If you didn't go through the comments, this one is interesting imo:

someone took a handheld picture of the moon with S21 using GCAM instead of the default app and the results are also quite awesome:


I am not convinced this means they are fake. I'd expect any image enhancement using good AI to depend on the surrounding areas. This could plausibly cause a blank square to be filled in when the surrounding areas have data, while a blank square on its own is not.

I wonder what kind of implications this will have for court cases.

There will have to be expert witnesses to discuss, for any image that is brought in, whether it's real or not.

When does computational photography count as "fake" or "AI generated"?

I don't see the line.

I'm still skeptical. He blurred it and then cropped it. The sharp edge from the cropping is going to mess up the deconvolution optimization by causing any and all deconvolution to strongly reduce sharpness along that edge. If I'm right, the Samsung would restore both halves equally well if the blur were applied after the cropping rather than before.

I still want to see this whole thing done without Gaussian blur, which is readily reversible by even primitive algorithms.

> which is readily reversible by even primitive algorithms.

Not true unless you know the exact kernel beforehand. Besides, there's zero reason why camera software would be programmed to attempt to reverse a Gaussian blur.

And finally, he downsampled the image, and you can't recover from that.

Isn't a Gaussian blur a pretty standard model of the blur from a poorly focused camera? Perhaps Samsung went to great lengths to reverse Gaussian blur because it happens all the time?

Gaussian blur and the blur from out-of-focus lenses are a different kind of blur: https://en.wikipedia.org/wiki/Defocus_aberration https://en.wikipedia.org/wiki/Gaussian_blur

No, an ideal camera would give you a uniform disc blur (the circle of confusion), because the out-of-focus light rays are distributed uniformly across a disc on the sensor.
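
To make the distinction concrete: the idealized defocus PSF is uniform over a disc (the circle of confusion), which is a genuinely different shape from a Gaussian even when the spreads are matched. A small numpy sketch of mine comparing a disc kernel to a variance-matched Gaussian:

```python
import numpy as np

def disc_kernel(radius: int, size: int) -> np.ndarray:
    """Idealized defocus PSF: uniform over a disc (the circle of confusion)."""
    y, x = np.mgrid[:size, :size] - size // 2
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def gaussian_kernel(sigma: float, size: int) -> np.ndarray:
    """2-D Gaussian kernel, normalized to sum to 1."""
    y, x = np.mgrid[:size, :size] - size // 2
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k / k.sum()

size, radius = 21, 6
disc = disc_kernel(radius, size)
# A uniform disc of radius r has per-axis variance r^2 / 4, so the
# variance-matched Gaussian has sigma = r / 2.
gauss = gaussian_kernel(radius / 2, size)

# Same total weight and similar spread, but visibly different shapes:
# the disc has a hard edge while the Gaussian falls off smoothly.
print(np.abs(disc - gauss).max())
```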

Gaussian blur is indeed super common in nature; any sum of many independent random contributions will be Gaussian in the limit.
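
That central-limit intuition is easy to demonstrate: convolving a uniform (box) kernel with itself just a few times already produces something very close to a Gaussian. A minimal numpy sketch:

```python
import numpy as np

def box_kernel(width: int) -> np.ndarray:
    """1-D uniform (box) kernel, normalized to sum to 1."""
    return np.full(width, 1.0 / width)

# Convolve a width-5 box with itself a few times: by the central limit
# theorem, the repeated convolution approaches a Gaussian shape.
k = box_kernel(5)
for _ in range(3):
    k = np.convolve(k, box_kernel(5))

# Compare against a Gaussian with the same mean and variance.
x = np.arange(len(k))
mu = (k * x).sum()
var = (k * (x - mu) ** 2).sum()
gauss = np.exp(-((x - mu) ** 2) / (2 * var))
gauss /= gauss.sum()

print(np.abs(k - gauss).max())  # already a small residual after four passes
```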

Perfect deconvolution of a gaussian blur is only possible in theory, or in ideal cases where you control the signal end to end, like a programming assignment. In reality there's plenty of sources of errors, like discretization noise, lens blur, sensor noise, and SNR is instrumental to the result. I've never seen a blurry image get reconstructed using deconvolution without artifacts.

Then there's the fact that a photo-enhancement program has no reason to even try deconvolving a Gaussian blur; it should use a more appropriate kernel that better describes out-of-focus blur.

His original blurred moon image is available in his first post. Without even taking the detour of photographing it from a monitor, try using his image and any deconvolution software you want, and try getting anywhere near his result.
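
For anyone who wants to try, the failure mode is easy to reproduce even on synthetic data. In this sketch of mine (illustrative only), a naive inverse filter recovers a noise-free Gaussian blur almost exactly, but adding just 0.1% noise makes it fall apart, which is why practical deconvolution needs regularization (e.g. a Wiener filter):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Synthetic 1-D "scene": a few sharp spikes on a dark background.
scene = np.zeros(n)
scene[[20, 45, 90]] = 1.0

# Periodic Gaussian blur (sigma = 2 px) applied via the FFT.
x = np.arange(n)
kernel = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
kernel /= kernel.sum()
K = np.fft.fft(np.fft.ifftshift(kernel))  # frequency response of the blur
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * K))

def inverse_filter(observed: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Naive deconvolution: divide by the blur's frequency response."""
    return np.real(np.fft.ifft(np.fft.fft(observed) / K))

clean = inverse_filter(blurred, K)                            # near-perfect
noisy = inverse_filter(blurred + rng.normal(0, 1e-3, n), K)   # falls apart

# The inverse filter divides by near-zero values at high frequencies,
# so even 0.1% noise gets amplified into enormous artifacts.
print(np.abs(clean - scene).max())
print(np.abs(noisy - scene).max())
```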
