Samsung's response on 'moonshot' controversy (samsungmobilepress.com)
80 points by achow on March 15, 2023 | 106 comments



After reading a lot of ChatGPT-like marketing cruft, one concludes that they deny nothing here. If you take a photo containing the unobscured Moon on a Samsung camera, the Moon will be replaced by a generated image of the Moon created in-camera. The details of how this is done aren't really important if you are wondering whether the photons reflected from the Moon that hit the sensor are represented in the image file that is ultimately output.


Here's a kind of barometer I came up with: imagine taking a photo of the moon, and the phone simply outright replaces the photo with a downloaded pristine astrophotography shot from a cloud server, and shows it to the user instead.

How close a phone's actual behaviour is to that dystopia, versus the other extreme of simply dumping the raw pixels from the sensor onto the screen, is how messed up that phone is compared to others on the market.

Right now, Samsung appears to be way up the bad end of such a scale.

At least for the moon... that is, the one situation we know about because someone jumped through hoops to discover it.


What happens to photo attribution, credit, copyright, etc in these situations?

If I take a picture of the moon with my astrophotography setup, I can release that with whatever licensing I choose. If someone takes a photo with their Samsung phone, and it replaces that image with a stock photo that has been very permissively licensed, they would not be able to do anything with that image that does not fall in line with the license. But who's being told what the license is?

It would be one thing if you came up with an app where you take a picture of something on your mobile and it returns the exact same subject photographed by a professional at much higher quality; everyone would understand the situation there. This is pretending you have a better camera than you have and selling it as a better camera, even though your camera isn't really doing the work.


I'm looking forward to a bright future when I take a selfie, and get replaced with a cloud-server-supplied shot of a smarter, handsomer, better dressed, not-graying-in-the-hair version of myself.


Also your prettified self is in an exotic location doing cooler stuff than you really are.


Great, so I have to pay for their exotic lifestyle as well? Where does it end??


Well, your friends will all be doing the same thing, so you'll need AI tools to de-falsify their selfies. Eventually, it will even learn what you would have liked/favorited/retweeted and just do it for you. From that point your social media presence is a self-licking ice cream cone and you can just ignore it and hang out in the Discord/IRC/group text.


To an extent filters are already doing this... and people love it.


I don't get that from what they state.

Quote: "The engine for recognizing the moon was built based on a variety of moon shapes and details, from full through to crescent moons, and is based on images taken from our view from the Earth."

This says the engine was built from that training, and later on they state that the engine is applied to multiple pictures your camera takes. Where specifically in the article does it say that images not from your camera are applied to your final image? An engine is a process, not necessarily a set of images applied to your images.

It is a processing method, the mechanics of which were ironed out processing other images.


Apple does this kind of retouching too as I understand it, but only for bokeh effects and it’s not something they hide.


Apple does "this kind of" retouching, slapping a professionally shot sticker of the thing you phone-camera'd on top of your photograph so it looks pro?


They do nearly the same thing, as I understand it. Samsung doesn’t specifically improve the moon, it’s just one of the things their filter has learned to improve. Apple is doing pretty much the same thing except getting away with it; it’s not only their bokeh as I understand it.


I feel like this is signature Apple, and most people who know much about digital photography (especially in phones) are aware that Apple has fairly distinct processing techniques – not necessarily distinct optics.

What Samsung did was relatively covert in comparison.


It’s a specific touch up photography mode.

If you draw your own moon with new craters in different spots, it’ll touch up your fake moon, so it’s generating new texture, not replacing the moon with a preexisting bitmap.

If you don’t like it, don’t shoot pictures in the artificially enhanced scenic optimizer mode. It has an on off button…

There’s no problem here except corporate transparency and shoddy journalism headlines.


Even if you take a picture that isn't of the moon, it might be replaced by a generated image of the Moon. The original proof that demonstrated this nonsense was happening wasn't photographing the moon: ce n'est pas une pipe, AI!


Ten years ago we got fun and exciting errors in scanned documents.[0] I'm looking forward to ten years from now, when I can read about a conviction being overturned because an AI-enabled camera did a similar "enhancement".

[0] https://www.bbc.com/news/technology-23588202


Something like this actually played a significant role in the trial of Kyle Rittenhouse. The prosecution tried to enter some low resolution images into the record and his defense argued they shouldn't be admissible because they had been upscaled. They asserted that by upscaling it, pixels had been added to the image which may not reflect what the camera actually captured. IIRC, the image wasn't very detailed and it was pretty difficult to tell where his rifle was pointed.

I don't know whether either side's argument was right, I'm only saying it happened.


> They asserted that by upscaling it, pixels had been added to the image which may not reflect what the camera actually captured.

I think it's undeniable that pixels were added; that video looks like it's lower resolution than an iPad's display. There have to be pixels added to make it even full-screen, much less zoom in.

I think the underlying question was whether pinch to zoom does nearest-neighbor upscaling (i.e. each pixel becomes a 2x2 block of the same color), or whether it does something more clever to "guess" at what those pixels would have looked like (something like averaging the pixels nearest to it).

The former seems fine to me. It preserves the original image, and zooming in really only gives you a blockier representation of the original. The latter is inventing things the camera didn't capture, and the method of interpolation could substantially alter the resulting image. Wikipedia has a pretty good example of both next to each other: https://en.wikipedia.org/wiki/Image_scaling
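For the curious, the two behaviours look roughly like this (a toy numpy sketch, not what any particular gallery app actually runs):

    import numpy as np

    def nearest_2x(img):
        # each captured pixel simply becomes a 2x2 block of the same value
        return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    def bilinear_2x(img):
        # in-between values are invented by weighting the nearest neighbours
        h, w = img.shape
        ys, xs = np.linspace(0, h - 1, 2 * h), np.linspace(0, w - 1, 2 * w)
        y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
        y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
        wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
        top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
        bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
        return top * (1 - wy) + bot * wy

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(nearest_2x(img))   # only values that were actually captured
    print(bilinear_2x(img))  # new, interpolated values appear

The first preserves the original samples exactly; the second already contains numbers the sensor never produced, and fancier "smart" upscalers go much further in that direction.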


Yeah, it was actually a very interesting argument to listen to in court. I'd recommend that anyone interested go listen to the trial.


How hard were they fishing for a mistrial at this point?



Yeah, you can't even see the man on the moon yet!

To be fair once I get my Bright Side of the Moon Lunar advertising corp set up distributing images on the moon, we'll be in a great position to sue Samsung for failing to reproduce my advertising with full fidelity.


> However, the moon will not be properly recognized by the camera and Scene Optimizer technology if it is obscured by clouds, or if the moon object itself is the side that is not visible from the Earth.

Great, if you're planning a moon-orbiting mission, you now know not to rely on your Samsung Galaxy to give super-resolution when taking images of the far side. Good to know!


There is a list of people that this affects. They probably all have iPhones though.


They should not be allowed to lie and say their technology "enhances details". This is not a TV show and there are no details to be enhanced.


Enhancing is the correct term here, at least, in a technical context it is.

Image enhancement is defined as a process which aims to improve bad images so that they "look" better. Enhancement aims to make it look better to a human, and often but not always takes into account knowledge of the human visual system to get better looking results.

Image restoration is defined as a process which aims to invert known degradation operations applied to an image, such as pepper noise, scratches, or missing fragments of old photographs.

It's surprising to me how strongly you object to the word enhancement even though image processing textbooks are clear about the definition and have been using the word for decades. I think it shows some ignorance about how cameras and image processing work in modern devices.


"Computer, please show me your best guess."


It would not be out of the ordinary for nobody to understand that what it is doing is impossible, except for a few souls in the engineering department who only agreed to implement it after repeated assurances, from someone eight steps removed from anything to do with communication, that the truth would be clearly communicated. I don't think anyone involved is lying; what happens in big corporations is that each of one hundred people distorts the truth by 1%, for a total distortion of 100%.


At some point I don't find it too outrageous that someone said "the moon rarely changes, we can beat Apple at astrophotography by cheating."


Of course there are details to be enhanced. The moon has craters. The craters are details. If a bad picture of the moon shows blurry craters, making the craters sharp enhances the details of the moon. I don't see where the lie is.


There could be a new crater from an asteroid that impacted the moon this morning. The crater is big, but not quite big enough to be visible in the blurred "real" image you shot this afternoon. After Samsung's algorithm enhances the picture, it has a level of detail at which the crater /should/ be visible, but since that enhancement is based on older images, the crater remains invisible.

This situation may seem contrived, but it is actually quite common that people disagree about details that were present at a certain event and try to resolve the disagreement by referring to photos. Now photos can no longer be trusted as arbiters.


It doesn't even have to be a new crater. The moon wobbles throughout the month so that the part that faces the earth is slightly different over time. Combined with how close the moon is to the earth (which varies) and the amount that is lit up, each day's picture of the moon is fairly unique (at least unique enough over a largish data set).

I would hope that their enhancement software pulls the current timestamp and synthesizes a picture that would be the same as what would be taken from a real high resolution / telescopic image at that time and place.
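The relevant quantities are at least easy to compute for any timestamp (a sketch using the pyephem library; the date is arbitrary, and whether Samsung's pipeline does anything like this is pure speculation on my part):

    import ephem  # pip install ephem

    moon = ephem.Moon()
    moon.compute("2023/3/15 21:00")   # any timestamp of interest
    print("distance to Earth (AU):", moon.earth_distance)
    print("moon_phase (illumination):", moon.moon_phase)
    print("libration lat/long:", moon.libration_lat, moon.libration_long)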


Because it's not getting those details by looking at the moon right now, it's getting them based on pictures people have previously taken of the moon.


That doesn't make it a lie to say that it "enhances details". You are discontent with the method used.


One method uses the photo you actually took as the base and enhances from that. The other method uses a random old picture of the moon from elsewhere and copy-pastes details of that onto your photo.

The distinction is significant, as for one of them your photo is the sole source of truth, while for the other it just "inserts" the image from elsewhere into your own photo. The former is expected, but the latter is not.

It's the same as the difference between enhancing a photo of yourself by doing some color/light processing or upscaling/sharpening using AI vs. "enhancing" by getting Brad Pitt's eyes and Angelina Jolie's nose copypasted onto a photo of your own face.


I'd argue that "enhancing details" is not the same as "replacing details".

Modifying the existing pixels captured by the sensor isn't the same as replacing an AI recognized section of the original pixels with other pixels entirely.

I think other interpretations of "enhance" aren't wrong either, since "enhance" is pretty subjective in the first place.


If there were suddenly a new crater on the moon whose photons were reaching the camera, and the AI algorithm decided that it didn't exist (and removed it) because older pictures of the moon didn't include it, I'd contest that the photo wasn't "enhanced."


> Samsung continues to improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon.

That bit is a step in the right direction, but I would've preferred the overall tone and message to be one of humility. For example, committing to "honest images" by default, and making some of the more questionable image enhancement optional modes/tools that can be enabled consciously.

Samsung used to be a low-end brand for computer peripherals. I love the quality and technical innovation of some of their more recent products. I want to see them be one of the most respected and trustworthy brands. Embracing honesty and integrity during this surge of AI methods being shoved into products seems important to that goal.


I just want a damn on/off switch for the image processing. Is that so much to ask for?


From the webpage:

> If users wish to take a picture without the support of AI, users can easily deactivate Scene Optimizer by heading to: Camera → Camera Settings → Scene Optimizer → Off

Does that fulfill your requirements or did you mean something different?


I missed that, thanks. I wish other phones followed suit with their image processing.


Seems like a very complicated way of saying "replace the blurry moon image with a high-resolution moon image stored in the cloud."


It's not in the cloud, this is all done on-device. They have a moon detection AI model that they pass the image through, the output of which is then used to set up and then run the moon enhancement model, which fills in the detail from moon images it's been trained upon.

These are part of a larger package of super-resolution models and processing pipeline that Samsung licenses from ArcSoft, a computational photography software company that specialises in mobile devices.
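In rough outline, the flow described above looks something like this (a minimal PyTorch sketch with tiny placeholder models; the real ArcSoft networks are obviously far more elaborate, this only shows the detect-then-enhance structure):

    import torch
    import torch.nn as nn

    class MoonDetector(nn.Module):
        """Placeholder: predicts a per-pixel 'is this the moon' mask."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
        def forward(self, x):
            return self.net(x)

    class MoonEnhancer(nn.Module):
        """Placeholder: predicts moon-like detail learned from training images."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 3, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    def enhance_frame(frame):
        # frame: (1, 3, H, W) in [0, 1]; detail is only added where the detector fires
        mask = MoonDetector()(frame)
        detail = MoonEnhancer()(frame)
        return (frame + mask * detail).clamp(0, 1)

    print(enhance_frame(torch.rand(1, 3, 256, 256)).shape)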


>... Samsung licenses from ArcSoft ...

Interesting, though there seems to be no reference to this kind of function on ArcSoft's site:

https://www.arcsoft.com/product/single-camera-solutions-on-s...

https://www.arcsoft.com/product/dual-camera-solutions-on-sma...

What they mention seems like more "logical" enhancements, though the selfie feature of gender and age detection:

https://www.arcsoft.com/product/single-camera-solutions-on-s...

is somewhat creepy.

Anyway, if they can recognize gender and age and what you have in your refrigerator ("This offers users a smoother and smarter refrigerator experience."):

https://www.arcsoft.com/product/smart-refrigerator-solutions...

surely they can recognize the moon.

Now that they made me think about it, my refrigerator experience, while usually smooth enough, is pretty dumb and repetitive.


On Samsung devices that support this, it's implemented in the file libsuperresolution_raw.arcsoft.so in /system/lib64, if you're curious to have a look at how it works.

Some strings from that file, relating to the moon detection and enhancement process:

    AHDR_SetMoonMask
    ai_moon
    ArcSoft_Moon_Detection_2.1.125231001.2
    ArcSoft_MoonEnhancement_1.1.12021.257
    ARC_SRR_DisableDaytimeMoonScene
    ARC_SRR_GetMoonSceneType
    ARC_SRR_SetMoonWeight
    bMoonMode
    bMoonProtect
    MFSR Moon Detect
    moon count is %d
    MoonDetect
    MoonDetect_Init
    %s_%02d_AIE_Output_ISO_%d_DRC_%0.2f_HDRMoon_%d_%dx%d
    %s_%02d_CropMoonMask_Idx_%d_%dx%d
    %s_%02d_DaynightAIMoon_Output_Idx_%d_%dx%d
    %s_%02d_DaytimeAIMoon_Output_Idx_%d_%dx%d
    %s_%02d_MoonDetection_Input_%dx%d
    %s_%02d_MoonHDR_Output_ISO_%d_DRC_%0.2f_HDR_%d_Moon_%d_%dx%d
    %s_%02d_MoonMask_Idx_%d_%dx%d
    %s_%02d_ResizedMoonMask_Idx_%d_%dx%d
    /sdcard/download/arcsr/%s_moon_mask_%dx%d.gray
    %s/FusionRun_pMoonmask_%dx%d.gray
    void mfsr_moon_detect(gk_mfsr *)


    moon count is %d
I am extremely tickled that they planned ahead for multiple moons.


I suspect that if the AI detects two moons, it probably abandons "enhancement" and drops a message in the logs (or renders an error, though I doubt that's what that particular string is for).


Now that you provided a good reference keyword (Arcsoft), it seems that it all already happened for the Galaxy S21 a couple years ago:

https://www.inverse.com/input/reviews/is-samsung-galaxy-s21-...


I think that’s not really how super resolution works, it’s closer to how diffusion models hallucinate details inspired by their training set.

I roughly think about it being like you have a lot of high res moons lossy compressed into model weights until they are basically distilled into some abstract sense of “moon image”-ness “indexed” by blurry image patches. Running the network then “unzips” some of that detail over matching low resolution patches.

(Using quotes to denote terms I am heavily abusing in this loose analogy)

Edit: Importantly though I think this isn’t that different from what you are describing in terms of whether this is potentially misleading customers about the optics on their device, because it is inserting detail not actually captured by the camera sensor.
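One crude way to make the "indexed by blurry patches" part concrete (a toy numpy sketch of example-based super-resolution, emphatically not the actual model):

    import numpy as np

    rng = np.random.default_rng(0)
    P = 8  # patch size

    def degrade(patch):
        # crude stand-in for the optics: reduce each 8x8 patch to 2x2 block means
        return patch.reshape(2, 4, 2, 4).mean(axis=(1, 3))

    # "training": high-res reference patches keyed by their own blurry appearance
    hires_refs = rng.random((500, P, P))
    blurry_keys = np.stack([degrade(p) for p in hires_refs])

    def enhance(blurry_patch):
        # find the reference whose blurry appearance best matches the input...
        dists = ((blurry_keys - blurry_patch) ** 2).sum(axis=(1, 2))
        # ...and "unzip" its stored high-res detail over the input: the output
        # texture comes from the reference set, not from the captured photons
        return hires_refs[np.argmin(dists)]

    query = degrade(rng.random((P, P)))   # a blurry observation
    print(enhance(query).shape)           # (8, 8): detail the sensor never saw

A neural network blends and generalises instead of doing a literal nearest-neighbour lookup, but the source of the extra detail is the same: the training set, not the exposure.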


Super Resolution is only part of the process. It says they then apply "Scene Optimizer’s deep-learning-based AI detail enhancement engine".

If the model and its weights contain detail not in the photo being taken, then it's tantamount to having high res images of the moon stored on camera and composited into the image. And if it doesn't then it's not the moon being displayed.

Not that it's necessarily bad, but it could be if it fools someone into thinking they're buying superior electro/optics. Enough such that it warranted the line "Samsung continues to improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon".


I think it's actually worse than compositing in a high-resolution moon image. With AI enhancement, the details will look believable but may be completely inaccurate.


> If the model and its weights contain detail not in the photo being taken, then it's tantamount to having high res images of the moon stored on camera and composited into the image.

This is what is happening. I agree it’s tricking people into thinking it’s all optics and that’s kinda bad.


The "it finds an image in its database" understanding of diffusion models doesn't work for SD because it can interpolate between prompts, but since there is only one moon, and it doesn't rotate, and their system doesn't work when the moon is partially obscured, there is really no need to describe it as anything more complex than that.


If you want a picture of the Moon in cloud, a nice cold frontal weather system should suffice!


No no, it's Samsung's amazing deep-learning-based AI detail enhancement engine and OIS and VDIS technologies. /s

What I'm wondering is what else is being swapped out.


You mean like how basically all of today's phones iron out wrinkles (at least on the selfie camera, very aggressively) for silky smooth skin, or completely change skin tone so that even ghouls look OK for a nice Instagram stream?

I really don't get all the outrage. Ironing out faces, changing colors and removing moles is a celebrated feature that even spawned the whole 'Apple skin' thing, but adding well-known static details to a blurry image of the moon is somehow suddenly crossing the line? That line was crossed a long time ago, my friends; look at the optical physics of those tiny sensors and crappy lenses and the results y'all want from them. People mentioned yesterday in the main thread that Apple's latest switched a side picture of a bunny in the grass for its other side, which is an even more hilarious 'painting rather than photography' case.

Plus, how exactly things are done in the phone is known to maybe 10 engineers, most probably in Korea, but people here quickly jumped on the outrage wagon based on one very lightly and non-scientifically done test on Reddit (I tried to repeat his experiment with an S22 Ultra but failed 100% of the time; with his own images, it was just a blurry mess in 100% of cases).

I have an S22 Ultra and it makes properly good, completely handheld night photos in total darkness (just stars, no moon and no artificial light). I mean me standing in a dark forest at 11pm, snapping nicely detailed foliage and starry pics, where my own eyes see very little of the scene. It easily surpasses my full-frame Nikon with a superb lens at this. But it's true that's done by the main, big sensor, not the 10x zoom one, which is then zoomed an extra 3-4x digitally to get a shot of the moon that spans the whole picture.


What do you mean, what else? I'm genuinely curious what kind of nefariousness you imagine is going on? Like, they're swapping images of Fords with Hyundais? Swapping images of catenaries with nigh indistinguishable parabolas?


This looks to be specifically optimized for the Moon, but they are very close to a generic algorithm that can be fed blurry cell-phone images of common photography subjects and match them to high-resolution, known-good stock photos. Those stock photos may be taken by pros using studio lighting and full-frame DSLRs with lenses containing more glass than the mass of the entire cell phone. Regardless of whether you're copy-pasting a rectangular bitmap from a stored stock photo or your convolutional neural net applies stock-photo-ness to input data, the effect is the same: you can generate photos that look really astonishingly good given that the input optics stack has a 5mm deep total track and your sensor measures less than 10mm across the diagonal.

Once you have that general framework, though, I'd expect Samsung to want to 'enhance' Samsung phones and smartwatches (especially), Coke cans, candles, and cars, butterflies and birds, the Moon or a MacBook, rainbows or diamond rings, and anything else that's both hard to take a good picture of and also a common photography subject. Speaking of Coke cans, that gives my pessimistic, dystopian-leaning imagination an idea: I wonder if Coke or other brands might in the future develop an exclusive partnership with Samsung to send stock photos of their product, resulting in photos of your meal where the familiar red can looks especially glossy, saturated, reflective, crisp, and refreshing... while an Instagram of your meal from the food truck that has cans of Pepsi still looks like a cell phone photo.

Upscaling photos of printed pages by matching against a font library seems like a similar opportunity, though you run into the old Xerox copier issue that the transformation removes information about uncertainty. I'm sure they'd love to have pictures of people look better too, but it's more difficult when there are billions of subjects to ID.

I'm not saying that they should or should not do any of this, but when you want to sell phones with cameras that make your shots look really good, this kind of technique lets you cheat the laws of physics.


Sorry for being late in reading your response, but yes I agree that those dystopian scenarios are not outside of the realm of possibility. As I said before, what makes the moon unique is that every photo of the full moon is essentially the same image from the same angle, whereas a photo of a Coke can is going to be from all different angles, sides, lighting conditions, even sizes, and thus much harder to convincingly "enhance." But I have no doubt that Samsung Research is up to the task of figuring out even more ways to monetize our eyeballs, so your point is well taken.


What if they swap some details of an eye or a nose and make one person look just enough like another that an innocent person is convicted?


Given that people still get convicted based on eyewitness testimony that has been repeatedly shown to be extremely inaccurate, I hold out little hope that the justice system will figure out how unreliable modern photography is anytime soon.


You think there aren't processes in place for this sort of thing? A due process even.

IT experts are brought into cases all the time for stuff like this. This isn't even the standard camera "lying", it's explicitly a special tab for taking super-zoomed in AI optimized photos.


And most of the "experts" brought into courtrooms will argue whatever you pay them to argue. It's an industry. Lawyers aren't picking random AI engineers out of a mom-and-pop tech company; they have dedicated people whose job is to sell their professional title to lawyers to launder a narrative through "an expert".


Where does one draw the line between "improving" an image and materially changing an image?

If this was not a moon-enhancer, but just a sharpener – which we have used on images for decades – would there be any fuss? Probably not. If this was an AI model trained to do better sharpening than the algorithms we've used historically, would there be any fuss? Maybe, but probably less than with this.

Short of literally superimposing fixed images of the moon on top of people's photos, which this is not doing, isn't this just a natural progression of the same sorts of image enhancements we've been doing previously?

Perhaps the issue is in the ML optimisation function? An image sharpener is optimising for a supposedly generic change, whereas this model is optimising for a particular subject that carries semantic meaning for humans. Similarly, an enhancer that improves contrast and sharpening on human faces might be fine, but a model that uses your other photos of your friends and family to improve their faces in new photos may not be, because it has the potential to change meaning if it gets it wrong?


> Short of literally superimposing fixed images of the moon on top of people's photos, which this is not doing,

it is 95% doing exactly that. taking your blurry moon, putting what it knows about the real moon on top of it, and doing an incredibly precise photoshop job to merge them together. it's using real world photos in a hardcoded database to substitute information directly into your photos, just in a very sophisticated way.

it's completely gross and I would want no part of this on my camera.


This is already a question in astronomy. At least in astrophotography (think Instagram accounts or computer backgrounds) there are virtually no images that don't involve capturing individual frames and programmatically combining the best x% to enhance the image (increase saturation, sharpness, etc.), and sometimes color is artificially added in.

Seems like what's missed in this is a lot of people not understanding that "regular" pictures of the moon from astrophotographers/NASA are actually big composites of multiple images that have gone through pretty intense processing - next to none of it is done with single-frame data, and the fact phones can mimic this at any level is pretty neat imo.
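As a toy version of that kind of stacking (a numpy/scipy sketch, not any particular astro package's pipeline): simulate exposures with varying blur, keep the sharpest fraction, and average them down.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    truth = rng.random((64, 64))
    # 50 exposures, each with a random amount of blur ("seeing") plus sensor noise
    frames = np.stack([gaussian_filter(truth, sigma=rng.uniform(0.2, 2.0))
                       + rng.normal(0, 0.05, truth.shape) for _ in range(50)])

    def sharpness(img):
        gy, gx = np.gradient(img)
        return (gx ** 2 + gy ** 2).mean()   # crude "lucky imaging" quality score

    scores = np.array([sharpness(f) for f in frames])
    stacked = frames[np.argsort(scores)[-10:]].mean(axis=0)   # best 20%, averaged
    print("error, single frame:", np.abs(frames[0] - truth).mean())
    print("error, stacked     :", np.abs(stacked - truth).mean())

The crucial property is that every number in the stacked result still comes from light that hit the sensor during those exposures.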


The difference is that these processes, i.e. stacking multiple images, are meant to extract as much signal as possible from a noisy set, and if they bring in extra signal, that is declared in the sources.


It’s worth pointing out that there can be a distinction between the type of information in signals and the type of information in filters. I think people feel disturbed because they think the filters here are overfitted, therefore are just parroting out memorized patterns of the moon. But if it were a filter that generalized to everything, people would just say “wow the camera sensor is amazing”.

The irony is that the backlash here will just guarantee that they put more effort into making this generalize better, so it’s received as less gimmicky. Whether that’s just putting an enhancement neural network in optical form at the ccd level or something else is anyone’s guess. People say they don’t want detail that’s not there, but to some extent the amount of detail is always being infused with assumptions and heuristics at every level of the imaging pipeline.


"isn't this just a natural progression of the same sorts of image enhancements we've been doing previously?"

No. Every other technique is improving on raw data using that raw data and knowledge about ways to process data.

This is adding data, not just processing it.


But what's the difference?

Sharpening algorithms might be statically defined rather than a bunch of weights, but they take a blurred image and create new data in it through approximations and heuristics defined in the algorithm.

I agree that it feels like there should be a difference, but I can't pin down what that actually is.
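Maybe the cleanest way to see a candidate difference: a conventional sharpener only ever recombines the pixels it was given. A minimal unsharp-mask sketch:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, sigma=1.5, amount=1.0):
        # every output pixel is a function of the input pixel and its local
        # neighbourhood; nothing learned from other photos enters the result
        blurred = gaussian_filter(img, sigma)
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

    img = np.random.default_rng(2).random((32, 32))
    print(unsharp_mask(img).shape)

A moon model trained on other people's moon photos has a second input, its weights, and that's where the "extra" detail lives.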


> create new data in it through approximations and heuristics defined in the algorithm

I guess the difference is where the algorithm gets its input data from. Just sensor data, or does it draw from a neural network that memorized a bunch of images / a data store of images too?

You can argue what's the difference between this and Huawei replacing the image of the moon when their phone detects it with a hi-res one it has in storage - it's an "algorithm" too.

Are you using just the data provided to draw conclusions? Or do you include extra data from elsewhere to get your conclusions?


It raises the question of why we don't apply the same scrutiny to astronomers and cosmologists, who use the same technique on a much larger scale. It's not like anyone "took a picture of a Black Hole," and yet there were hundreds of newspaper headlines suggesting that's exactly what happened. Virtually every "photo" produced of "space" is created through an "imaging" process, which in recent years frequently involves fairly intense processing steps using ML algorithms that are subject to errors and biases which could substantially alter the end result. But the scientific community, and especially the media, largely takes these "images" at face value, despite them having little basis in any "actual" snapshot of reality. I don't really see how what Samsung is doing is any different.


Astronomers aren't lying to us to sell smartphones based on the quality of advertised images.


lol it looks like we posted similar things at basically the same time - I'd also maybe add that it's noteworthy that this isn't what Huawei was doing where they just overlayed a stock photo of the moon. What Samsung is doing is enhancing the actual data your phone's sensor is getting, which makes it much more like astrophotography processing than just some gimmick/hack.


Yeah I noticed that, lol! We even used the same verbiage. You must be a really smart person ;)


Doing any form of object detection whatsoever as part of image processing is too much imo. Any form of AI as well.


Recent and related:

Samsung caught faking zoom photos of the Moon - https://news.ycombinator.com/item?id=35136167 - March 2023 (27 comments)

Update to the “Samsung space zoom moon shots are fake” - https://news.ycombinator.com/item?id=35123389 - March 2023 (122 comments)

Samsung “space zoom” moon shots are fake, and here is the proof - https://news.ycombinator.com/item?id=35107601 - March 2023 (386 comments)


I wonder if Samsung phones do the same "upscaling" on popular landmarks: the Eiffel Tower, the Grand Canyon, etc. I'd give it better than even odds.


Think of the bright side: billions of crappy photos of the same things all over the world will contain a lot of the same information, and AI-guided (of course!) partial image deduplication will save us many data centers.


"However, the moon will not be properly recognized by the camera and Scene Optimizer technology if it is obscured by clouds, or if the moon object itself is the side that is not visible from the Earth."

Do not choose the Samsung for your travel photos of the dark side of the moon.


Now I'm imagining an astronaut snapping a photo with their phone while in Lunar orbit, and when they look at the image the Moon's replaced with the broken-image icon from Netscape.


I snorted when reading that, how many cell phone images of the back of the moon exist?


None, yet. But a spaceship full of instagrammers is on its way.

https://www.space.com/dearmoon-announces-moon-crew-spacex-st...


Reading that article gave me the "Just sending a bunch of random people up in a rocket to go around the moon, what could possibly go wrong?" vibes.


> Samsung continues to improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon.

What is this even supposed to mean? Can somebody explain the difference between "a picture of the real moon" and "an image of the moon"? I would take them to mean the same thing, right?

Samsung certainly isn't reducing my confusion with that statement. I guess I'll just stick with my Motorola.


between the act of taking a picture of (the real moon) and [a picture of] an image of the moon.


What's the difference between photographing the real moon and photographing an image of the moon?

You might say that the real moon is infinitely far away and has a straight line of sight. Okay, what about the moon through periscope mirrors? The moon through binocular lenses (in front of the phone camera)? What about the real moon next to the moon reflected off a window? What about the moon projected through a telescope onto a piece of paper? Which one of these "moons" should be subject to image "enhancement" and which should not?

You might say the real moon has a certain angular size and brightness... but again, lenses, filters, cloud cover, distances to artificial objects, brightness of artificial objects.

I don't think there is any realistic way to distinguish the real moon from a picture of the moon.


The difference is they are going to try and detect it somehow and not optimize fake moons.


This is a total non-answer. Nothing but technojargon and babble for baffling the average consumer. They fall short of admitting that yes, they effectively replace images with "enhancements" that are sourced from their ML models in some cases - they prefer to talk about how they merge multiple frames - taken by you - with AI Magic (tm).


I got the exact same interpretation of the article and really dislike what they wrote.

They start off by trumpeting their wondrous AI image enhancement technology. They make just a passing reference to the deliberately blurred moon image without explaining how their enhancement technology is wholly inappropriate for "enhancing" an image with no detail to begin with. They continue bragging about AI, object recognition, image stabilization, multi-frame super-resolution and denoising - without addressing the elephant in the room, which is replacing blurry objects with pre-loaded stock imagery. It keeps reiterating that what it's doing is "detail enhancement" (I counted 8 times), not "pasting a stock photo".

What an utterly disingenuous press release. It's nothing but a red herring meant to distract and divert.

> Samsung continues to improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon.

So... they end with a line that basically says "we aim to lie more selectively"?


Things like this feature should be opt-in, or basically all pictures made with those smartphones will be legally unusable.


They are opt-out, because people want pictures that look like something. Here are examples of raw pictures from low pixel count sensors and what processing is necessary to get a usable picture. https://www.strollswithmydog.com/raw-file-conversion-steps/ https://petapixel.com/2019/07/15/what-does-an-unprocessed-ra...


A solution would be to always store the original unenhanced image, like Apple does for images in iCloud.


samsung: "someone is claiming we are enhancing photos with hardcoded data taken from other photos, let us respond to this claim. We are Enhancing your Photos with Hardcoded Data taken from Other Photos."

Can't wait till it puts someone else's dog or kid there.


If there was a change to the surface of the moon that was visible to the naked eye, would this camera's software filter it out? Because that is where I draw the line. Imagine seeing a large asteroid impact, taking a photo, and getting gaslit.


Early on in the 'controversy' someone on Reddit undertook a similar test, by photoshopping an image of the moon to have a different crater pattern, blurring it, and taking an enhanced photo. Samsung 'enhanced' the moon with the incorrect crater pattern.

After seeing that, I was pretty skeptical about the "they're just swapping in a high res png!" claims, regardless of how much they got repeated. This post is more evidence they're not just swapping in a higher res image, but I suspect people will keep repeating it anyway.


That the enhanced photo contains the wrong data would seem to pretty clearly prove that it is not enhancing the data from the camera sensor and instead using pre-existing photos as the data source.


To clarify:

1. User took a photo of the "real" moon, and manipulated the craters in photoshop.
2. User photographed a low-resolution image of the manipulated moon image with the Samsung phone.
3. The "enhanced" photo included the manipulated, not-real craters in greater detail.

It seems to me that indicates the camera app is utilizing data from the camera sensor to some extent, not just using pre-existing photos, because it "enhanced" craters that do not exist in any pre-existing photos. Why would it indicate the opposite?



This page seems designed to direct attention anywhere but the meat of the issue: are the moon details in Samsung photos derived from sensor data alone, or also external photos? To me, the answer is clear: Samsung can’t produce this result unless they also ingest external, high res moon photos. Yet Samsung frames it as “enhancing details”, not “adding details”, clearly implying the details in question were already there in some form.


This is the most long-winded version of "yes, we did it, and we're not sorry" in recent memory.


I do this too. I meet someone who expresses an opinion relevant to the liberal/conservative divide. Based on which side it better matches, I fill in the rest of their opinions for them.

I feel bad for doing it, but it seems to work astonishingly well. Had I not told you, you probably wouldn't have been able to tell.

I guess I just meet a lot of people with moon-shaped opinions.


You should feel bad for doing it, because it only contributes to the growing divide between people. There's a fairly large cohort of people with opinions that straddle both sides of the aisle from whom you couldn't fill in most of their opinions based on just a single opinion.

reddit.com/r/2ALiberals comes to mind.


In other news, Samsung 990 Pro SSDs are failing in days. No relation.


You should upgrade to the 999 pro model which will transparently provide data one could imagine being there. There is also talk of a firmware hack unlocking the feature not just for damaged bits but for the whole disk address space. It is believed this feature is still under test and will be commercially available later this year.



