Samsung's AI photo feature adds creepy teeth to baby photos (boingboing.net)
184 points by CharlesW on March 30, 2023 | 166 comments



Looking at the relevant tweet:

https://twitter.com/earcity/status/1638582541706829824

It's not just adding teeth; it's actually changing the facial features, which seems completely insane to me. Is it just me? Like, this is a terrible application of this technology.


And this after they were caught “faking” photos of the moon:

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...

To be clear, they’re adding detail that the camera can’t see. What they’re doing is more sophisticated than substituting a reference moon.png in place of your shot, but it’s adding Samsung-provided detail if it detects a moon either way.

So if a new crater appeared on the moon tomorrow, their camera would need a software update to capture it.


> a new crater appeared on the moon tomorrow, their camera would need a software update to capture it.

Source? Afaik it was using ANNs to upscale, not dropping in previously taken images.


You cannot upscale something that doesn't exist in the image data. The moon image test involved a massively downsized and blurred image; the detailed information about craters was simply destroyed and cannot be recovered, simple as that. On top of that, they also tested it with a halved copy of the moon in the same picture, and only the full moon got more detail, as the AI didn't recognize the halved moon.

Samsung made use of the fact that the moon always looks the same from our perspective as it's tidally locked, meaning image "improvement" algorithms can more or less copy-paste moon surface details into the image (yes, of course it's more complicated than drawing moon.jpg over the picture, but the end result is the same).
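You can demonstrate the information loss to yourself in a few lines (a minimal sketch using Pillow; "moon.jpg" is a placeholder file name):

    from PIL import Image, ImageFilter

    # Recreate the Reddit test: destroy the detail, then try to get it back.
    moon = Image.open("moon.jpg")                            # placeholder input
    tiny = moon.resize((moon.width // 8, moon.height // 8))  # massively downsize
    blurred = tiny.filter(ImageFilter.GaussianBlur(4))       # blur what's left
    # An honest upscaler (bicubic here) can only interpolate existing data:
    restored = blurred.resize(moon.size, Image.BICUBIC)
    restored.save("restored.jpg")

Any crater detail that reappears beyond this baseline had to come from the model's prior knowledge of the moon, not from the photo.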


People added extra craters, and as long as the image was recognized as the moon, the added craters were enhanced as well.


The comment above is a great example of the limitations of language when dealing with image manipulation and generative image enhancement.

“People added additional craters” … no they didn’t. Without knowing anything about what people interested in stress-testing Samsung’s software did, I can say that because if they did really add additional craters, it’d be front-page news.

I assume what people actually did was add artifacts which are not real (let's call them 'fictional') to an image. Samsung's software enhanced them as long as these fictional artifacts conformed to its notion of what a blurred crater looks like.

So it turned fictional artifacts into higher-res fictional artifacts. At this point, you might as well generate the moon image in a terrain generator like World Machine.

To put it another way: if Coca-Cola puts a giant billboard that says "Drink Coke" on the moon tomorrow, and it turns out Samsung's software didn't expect that, it'll need a software update.

More realistically, if tomorrow a lunar impact crater appears with a shape distinctive enough that Samsung's software doesn't expect it, again, you'll need a software update.

And that’s the heart of the issue: the camera is no longer capturing reality, but instead injecting detail that someone at Samsung decided would be appropriate for your shot.


The Reddit user tested at one point by completely removing/whiting-out areas of the image, to remove any possible detail to upscale. And it still managed to create the same sort of high-detail upscaled image.


> And this after they were caught “faking” photos of the moon:

This genuinely isn't informative evidence of how the process works. You're not seeing the raw samples from the camera, which are going to be very noisy even if you're showing it a white circle. "It's actually the moon" may be a reasonable belief given the noise level.


These features have been on Asian made phones for a while. Sometimes you'll see a "beautify" filter which will feature things like "lighten skin", "make eyes rounder" or "thin face".

I know how strangely offensive this all sounds, but it's completely factual. There's some lost-in-translation thing that I'm assuming is benign and normal in some other culture but comes across as pretty inappropriate, even a bit racist, to Americans.


Particularly common with selfies, because most people see photos of themselves and think they look horrible (they're used to seeing themselves mirrored!) and want to "touch up" the picture.

Incidentally on a tangential note, this is why most front facing cameras on phones, tablets, and laptops mirror the video feed on your own screen: To trick your brain into thinking you look OK.


"trick your brain into thinking you look OK" -> show you the version of your face you're used to, since most people most often look at themselves in ... mirrors.


That does sound like there are some remaining effects of colonialism (white=beautiful, etc.)


Yes, but it's also historically associated with class (even in the West): the lower class had to do manual labour outdoors and thus were exposed to the sun longer. As a result, they tended to be sunburnt and darker. Thus, dark skin was associated with lower class / poverty, and fairer skin implied that the person led a more sheltered life (due to their wealth).


Europeans have the opposite association: rich people can afford to "go south for the winter", so looking tanned is a sign of wealth and prosperity.


Unfortunately excessive retouching/filtering of digital photos has been a significant problem in East Asian cultures and that is spreading elsewhere quickly. Samsung is merely catering to the target market that has a demand for these things because beauty standards in most human societies are messed up.


> Unfortunately excessive retouching/filtering of digital photos has been a significant problem in East Asian cultures and that is spreading elsewhere quickly

I'm not sure why you think this was something that only happened in "East Asian cultures" until now?


It's on a completely different level there. My sister went to a wedding in China and sent me some of the pictures. The smoothing was so extreme I couldn't even recognize her. And they were photos from a "professional" photographer.


You can't have real people marring the image of a perfect wedding.


Apps like Meitu are far more prevalent in East Asian Cultures.

A large percentage of the population hasn't posted a non-retouched version of their photos in years.


A problem for whom, exactly? Asians seem to love it.


It's a problem for reality. Let's not bring culture into this. Those faces do not exist in reality.


I reject your reality and substitute my own.


I think you mean you upscaled their reality to your own :)


Do they? It’s highly problematic because it’s basically an arms race.

Everyone bemoans it but they do it anyway because otherwise you feel like you can’t compete.

It leads to widespread body dysmorphia and severe self-image issues.


Long term AGI play to make humans progressively less comfortable in meatspace. Reality getting you down? Just plug yourself into the ~~battery~~ Metaverse. Our self-disconnect system has a remarkably low failure rate, we promise.


Now the Meat-a-verse is something I could get behind.


Yes, let's defend bullshit beauty standards that make people even less comfortable in their own skin.


That's your western opinion though.


Maybe, but matters of self-esteem should transcend culture. Nobody deserves to suffer unnecessarily because they are literally physically incapable of measuring up to some manipulated version of "beauty", and nobody needs products that cater to it.


Is this more your issue, though? The "beauty app" features are super popular here in Asia. So again, subtle colonialism where the West forces its ideals on Asia.


As if the east doesn't have its own things it would like to impose on the west.

I think self-esteem transcends culture. You can call it colonialism, but I say you want people to suffer. Because culture!

Also, unrealistic beauty standards are fairly universal. These awful beauty apps are but one solution, and they are gaining ground here in the west. Is this a case of eastern colonialism? What about TikTok?

Empowering people to feel comfortable in their own skin, as they are. How could anyone not want that?


They're not suffering though. And they're choosing it. That's my point. You seem to be projecting your own bias here.


Yes, I am projecting my bias. That's my opinion. That is the whole point.

If you cut someone, and they heal, that doesn't erase the fact that you cut someone. The important thing is not to cut them.

My friends here drive me crazy with their self-conscious body-shaming because they grew up constantly surrounded by impossible images of beauty. They do this so much that it is a nuisance. And what we have here is but a fraction of what it sounds like the east has. People living under your system must be positively insufferable then, without their precious beauty apps.

To be clear, I'm against any fake anything. Fake tits, fake lips, fake asses, fake selfies.... unless there are medical reasons it is all garbage.


OK, well, at least you are honest about your bias, so thanks for that. Using your analogy, though, no one is being cut here. You are projecting that they are cut, but they never are. And medical procedures to improve appearance aren't fake; they simply create a new look, and in Asia they lead to a lot of improvements in self-esteem. The medical reason is that it makes them feel better.


Imagine complaining about an optional AI upscaler doing AI upscaling things. Literally shooting yourself in the foot and crying from the pain.


Imagine complaining about the idea of living in a society where pretty much every single picture we see reflects an AI's view of what reality "should" be rather than actual reality, and where all pictures of everyone's friends and family are automatically altered to suit the AI's view of what a human "should" look like. The only images of actual reality are those taken by the select few who both care enough and are technically proficient enough to get their cameras not to do automatic AI retouching.

Doesn't sound like such a ridiculous thing to complain about anymore, does it? Do you at least see why some people may be opposed to further steps in that direction, even if you yourself are fine with it?


You have the option to turn it off if it bothers you, it isn't even all that difficult to do.


The issue is one of defaults and changing and influencing the greater behaviors of society as a whole.


In this case you even have to intentionally press the enhance button.


Korea in particular is more known for actual plastic surgery than digital plastic surgery.


Samsung features have always overpromised and underdelivered. Remember the facial unlock that would allow a photo of someone to unlock the phone?


That was a thing with one of the later Google Nexus phones, too.


Android added face unlock way back around 2012, for the Nexus 7; it was incredibly insecure. I think they eventually removed it.


Didn’t Windows have the same issue too?


The mobile OS? Did that even have face unlock? I don't recall that being a thing.


You mean the iris unlock? It turned out all those videos were fake, made by either pressing the fingerprint sensor on the back or using a store demo unit.


Pixels are pixels. How is a camera supposed to know if a face is real or a color print?

Facial recognition is a bunch of snake oil


Apple uses a laser infrared dot projector.

Other manufacturers cheap out on that part, or prioritize design (no notch/camera hole) over security.


"Cheaping out" is not the best description. Apple has riddled the space with a ton of patents. https://insights.greyb.com/apple-facial-recognition-patents/


Microsoft is using IR to do Windows Hello so there must be some workaround to the patents.


How do your eyes do it? (You can do it securely without a lidar by thinking about this.)


How does Apple do it?


"The technology that enables Face ID is some of the most advanced hardware and software that we’ve ever created. The TrueDepth camera captures accurate face data by projecting and analyzing thousands of invisible dots to create a depth map of your face and also captures an infrared image of your face. "

https://support.apple.com/en-us/HT208108


Yeah, I noticed that. It's making the eyes look more Asian... The Koreans are onto something...

/joking of course


Just checked the Twitter thread.

Lots of the replies were just "so what?"

Obviously lots of people just don't care.


When I stop to think about it, I feel like photography these days is in such a weird state. We started with analogue photography that literally captured the incoming light; today we're taking multiple photos and merging the results together. Before long we're going to be at the point where your camera software recognises you're stood in front of, say, the White House, goes online, grabs stock photography of the White House and merges it into the shot you took.

I don’t know at what point it stops being a photo any more. In many ways I suppose it doesn’t really matter, but I wonder if I’ll be looking at my kids’ graduation photos and seeing a smile an AI has approximated my child probably had in the moment, based on previous photos and stock photography of graduations.


> I don’t know at what point it stops being a photo any more.

Personally, I think it's at the point that you start using AI/ML for anything other than the parameters involved in capturing the raw image (focus, shutter speed, aperture, etc) and the parameters of RAW processing/the sorts of editing that's allowed in a typical photography competition (so things like exposure, white balance, colour adjustment, etc).


I think the analysis can get a bit more blurry if you think about the types of *local* adjustments that are typically applied, with great taste, by professionals… Things like sky/foreground/background adjustments, spot adjustments, dodge and burn, etc.


I feel like it’s a pretty good definition to start with, but I’d extend it a little further to encompass the techniques you mentioned and also add things like HDR bracketing, and astrophotography techniques like dark frames and photo stacking.

Essentially I think as long as the photo is built up from data captured by the camera sensor(s) and no additional data is “added” to the photo, no upscaling, no substitution, no textural or morphological manipulation of objects in the frame, then it’s still a photo even with sophisticated AI processing. But the moment you upscale, airbrush, or morph anything… not a photograph.


One field where ML is apparently making strides is autofocus systems in mirrorless cameras. Not that I would know, though. And that is a field I am totally OK with: guessing where the subject is going, and where to focus on the subject based on user input, is useful. Shutter, aperture and ISO settings have been solved since non-digital cameras got a Program mode (ISO was, obviously, added with DSLRs). It is basically just a simple control loop running along a curve based on the lens and camera you are using.


That's an arbitrary path-dependent place to put the line. Realistically if you spend infinite time and effort doing the kind of dodge/burn that every photography competition allows then you can make the photo look like anything you want.


I'll even allow it to change the exposure/balance/adjustment in parts of the picture separately and it's still a photo. Once you cross that Rubicon though I think it's a different thing.


> before long we’re going to be at the point where your camera software recognises you’re stood in front of, say, the White House, goes online, grabs stock photography of the White House and merges it into the shot you took.

You realize this is not too different from what the SuperResolution networks are doing, right?


On one hand, I'm not a photographer, I don't want to be a photographer, and I don't want to spend 30 minutes trying to get everything lined up and everyone looking in the right direction and not blinking or sneezing or whatever. I just want a decent picture that captures people being happy doing a thing.

On the other hand, maybe the world needs more reminders that most people are pretty mundane, hardly able to get more than 2 people looking in the same direction for long enough to snap a shutter, just trying to get through life.


do you really need perfect photos? Does it add anything to the memory?

imho there's value in capturing that someone had a weird expression on their face or blinked.


Yes, it does add something.

To be vain about it, there's enough drama and messiness in day-to-day life. I don't want to be reminded of that when I look back; I want to see the good parts, that it was all worthwhile.

But, more practically even, photographs capture history. You may not be thinking about it when you're just taking a personal photo, but a fair bit of history does come from personal collections. Especially less significant figures or events where those personal artifacts are the only things that even exist.

It's difficult to gain much useful information when the images are blurry, distorted, or covered in shadow. You may appreciate the funny face, but can a historian or a great-great-grandchild confirm the person in the photo when they are covering half their face in a sneeze?

That said, the pendulum definitely swings in the other direction, too. Despite all the information we gather these days, there may be less meaningful history for the next several decades because you simply can't trust any of it is actually genuine.


Yes, for my personal photos. But you have to remember that people aren't taking photos for their own collection, they are taking photos to shine on social media.

Just look on Instagram. How could you compete with all those fit, beautiful and healthy people if you don't heavily edit your photos?

(To be totally honest, I think that some beauty filters aren't as bad, because most people don't know how to hold their camera and tend to distort their face with closeup lenses.)


> before long we’re going to be at the point where your camera software recognises you’re stood in front of, say, the White House, goes online, grabs stock photography of the White House and merges it into the shot you took.

We’re there.

https://arstechnica.com/gadgets/2023/03/samsung-says-it-adds...


> I don’t know at what point it stops being a photo any more.

When any form of object recognition is involved.


Wouldn't that preclude using facial recognition for setting focus and exposure though (which has been a thing for years)?


I would say it's a matter of input vs output. Using AI to identify the best part of the image to act as input is fine (e.g. where to focus, what to base exposure/white balance on, etc). But if you treat that part of the image differently as part of a function's output, e.g. adjusting color differently on a face vs the rest of the image in post, that's where my sense of distaste kicks in. I'd rather that rely only on local color information (e.g. treating a part of the image differently because it's darker is fine).


Using facial recognition for focus is ok since the camera still faithfully captures the light entering the lens, but recognizing things in the frame to treat different parts of the image differently in post-processing is a big no-no.


If you don't let it do that, the picture will often be annoyingly dark in the foreground / blown out in the background, because cameras' dynamic range isn't anywhere near that of your eyes. This is called local tone mapping (or "dynamic range optimizer").

Sharpening also benefits from object segmentation because you don't want the effects to bleed over into different areas, you get halos that way.


Local tone mapping works just fine without object recognition. It's a dumb algorithm that goes over the image and normalizes pixel values relative to their surroundings to cram the higher dynamic range of the image into the lower dynamic range of the screen. Phone camera apps have had it for at least 10 years, and people did it manually with exposure bracketing ever since digital cameras became mainstream.
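The whole thing is only a few lines; a toy version with NumPy/SciPy (a sketch of the idea, not any vendor's actual pipeline):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_tone_map(lum, radius=25, strength=0.6):
        # Naive local tone mapping on an 8-bit luminance array: compress the
        # slow-varying brightness toward the global mean while keeping local
        # contrast intact. No object recognition -- just pixels vs. surroundings.
        lum = lum.astype(np.float64)
        local_mean = gaussian_filter(lum, sigma=radius)  # the "surroundings"
        detail = lum - local_mean                        # local contrast
        compressed = strength * local_mean + (1 - strength) * lum.mean()
        return np.clip(compressed + detail, 0, 255).astype(np.uint8)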


Yes, but any local area algorithm benefits from segmentation because it automatically becomes "smarter". Upscaling is the most obvious one - the upscaled version of a blue object on a red background is not the average of the two colors - but it applies all over the place.

I don't know of any implementations of this, but an interesting one would be auto white balance. It's typically done as a global slider, but if the image has multiple light sources this doesn't always look good.

And actual straight-up "knowing what's in the scene" AI can help too; people shouldn't look sickly in your photos just because they're under a low-CRI yellow light. You probably want to know what color skin tones actually are.


> Upscaling is the most obvious one

I don't want any upscaling at all in my photos. One pixel on the sensor must correspond to one pixel in the output.


Camera sensors don't have pixels: https://en.wikipedia.org/wiki/Bayer_filter

JPEG files also don't have pixels: https://en.wikipedia.org/wiki/Chroma_subsampling

Then again, OLED displays don't either: https://en.wikipedia.org/wiki/PenTile_matrix_family


I'm aware of all these things but there still is a 1:1 correspondence between camera sensor pixels and JPEG [luminance] pixels. Yes, color information isn't complete, but it's the right balance. Enlarge it and you aren't adding any new information. Shrink it and you're losing information.


Definitely not true unfortunately. In particular it's not true for areas colored red or blue, because the green channel doesn't have the luminance information - that's why I mentioned them.

Lightroom and at least one current phone camera have ML Bayer demosaicing for this reason and it's visibly sharper.

(Note sharpening and enlarging are the same operation.)
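For reference, the non-ML baseline is plain interpolation of the missing samples, which is exactly where pure red or blue regions go soft (a toy RGGB sketch with NumPy/SciPy, not what Lightroom actually ships):

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(mosaic):
        # Toy bilinear demosaic of an RGGB Bayer mosaic (H x W float array).
        # Each missing colour sample is a weighted average of known neighbours.
        h, w = mosaic.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])

        def interp(mask):
            # Normalized convolution: average over the known samples only.
            return convolve(mosaic * mask, k) / np.maximum(convolve(mask, k), 1e-6)

        return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])

Three of every four output values in the red and blue channels are guesses, which is why a learned prior can look visibly sharper there.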


Well, if that's the case, there are two options: the photographer got exposure wrong (easy to do in high-contrast situations) or you create an HDR photo in post. More often than not, though, just changing the metering, or the composition by e.g. getting rid of some very bright sky or dark foreground, allows any modern (read: post-late-90s) camera to get exposure right in P mode. Added benefit of digital photography: you can check your shot on location, including histograms and even live histograms. Takes all of 2 minutes in the field with some practice. Of course, a smartphone camera won't do any of that.


This is true for dedicated cameras, but smartphone sensors have less dynamic range and so can't naturally get good pictures in a lot of normal situations, especially mixed (indoor+outdoor) lighting.


Even when people were using analog cameras they would edit their negatives before making prints.

It was slower and harder to do than with digital editing, but it was absolutely common-place and normal.

Here's a good overview of some of the old processes and timelines:

https://fixthephoto.com/blog/retouch-tips/history-of-photo-r...


> It was slower and harder to do than with digital editing, but it was absolutely common-place and normal.

As someone who grew up in that era, I can say all the family photos have not been edited, nor would the majority of people who owned a camera back then spend the time or money to do so. Yes, commercial photo editing services have always existed, and of course you'd expect things like magazine covers and other prominent, publicly published photography to be heavily edited, but they didn't automatically do it to everyone's photos for free; and that's the huge difference between then and now.


I agree that the average person who owned a camera wouldn't have done this as a matter of course.

Honestly I was thinking more of the "portrait studios", rather than the home shooter. Though I guess in the early days of cameras there were a lot of people who did their own development at home, in ad-hoc darkrooms. Maybe they didn't edit so much, but it was within their means, and I'm sure it was more commonly done than we'd suspect.


> As someone who grew up in that era, I can say all the family photos have not been edited, nor would the majority of people who owned a camera back then spend the time or money to do so.

Whoever developed your photos would've adjusted the brightness and white balance based on what looked right to them if nothing else. Send the same negatives into two shops and you'd get different-looking pictures back.


They used these things: https://125px.com/docs/unsorted/kodak/tg2044_1_02mar99.pdf

It was an automatic process and colours were consistent. I find that reading theories about what might have happened 20 years ago is becoming pretty annoying. I shot some kind of Fujifilm mostly and if it was under/overexposed that was like your problem.


Editing -> retouching.


What's the difference?


Editing/retouching changes small parts of the image and materially alters it to show or leave out stuff that was not in front of the lens, while changing white balance and such affects the whole image uniformly.


Retouching was absolutely not a common thing for normal hobbyist camera owners. Most weren't even aware this would be realistically possible to do with their photos.


I think the line can be drawn at adding information that wasn't originally available to the camera at that time.

So HDR photography (taking multiple photos milliseconds apart and merging them) is fine, because it's just using information that was available while you were standing there taking the shot. I don't see how this is really any different from a long-exposure shot in fact.
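Concretely, the merge step can be as dumb as weighting each pixel by how well exposed it is; a simplified sketch of exposure fusion's "well-exposedness" weight, assuming pre-aligned frames with values in [0, 1]:

    import numpy as np

    def fuse_exposures(frames, sigma=0.2):
        # Blend bracketed shots, weighting each pixel by how close it is to
        # mid-grey (i.e. well exposed). Every output value is a mix of values
        # the sensor actually recorded; nothing is invented.
        frames = np.stack(frames)                    # (N, H, W, 3)
        gray = frames.mean(axis=-1, keepdims=True)   # rough luminance
        weights = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))
        weights /= weights.sum(axis=0) + 1e-8        # normalize across frames
        return (frames * weights).sum(axis=0)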

Using a polarizing filter to enhance the sky color and see the fish underwater is fine, because it's just using information available that time (and in fact, selectively ignoring some information, namely light polarized the "wrong" way).

Editing light levels differently in different parts of the frame is OK too because that light is still coming to the camera, you're just processing it differently.

Going online to grab ultra high-res stock photography of the Moon to enhance your shaky-cam photo is not OK, because that's adding information that the camera didn't have available to it (and worse, it's quite possible the downloaded information is inaccurate).

Going back to your White House example, what happens if someone takes a photo of the White House lawn because they want a photo showing the new trees or flowers that were planted or just bloomed. But Samsung downloads some stock photo to "enhance" the photo, and of course this stock photo was taken before the plantings, or at some other time of the year when the flowers weren't blooming. Now this poor sucker has a completely invalid photo showing something that didn't exist when he took his trip to DC to photograph the White House lawn, and probably won't notice until he looks through his vacation photos on a large monitor.


> I don’t know at what point it stops being a photo any more.

Considering that Ansel Adams would spend weeks reworking his negatives to get the perfect print, I think you'll find that horse bolted a very long time ago.


Difference being, he worked on his own negatives; he didn't add shit to them that he found on the internet or in some database.


When you are no longer graphing the impacts of photons on a sensor, you are no longer doing photography.


It's just stupid that we gum up relatively simple input -> output pipelines in favor of this longer and more complicated bullshit... and for what? So unpracticed and uneducated stupi...er, laypeople can feel better about themselves? So companies can cheap out on hardware or optics investments? So some advertising jackass can sell a few more smartphones by manipulating people's emotions?

There is so little desire to capture the world as it is. We humans have to insert our bullshit desires and artifice into everything. And winner-takes-all capitalism means the majority market of morons is literally the only market that can be serviced cost-effectively (meaning... at all)

God I hate people so much. So much potential... wasted.


Well I don't think your approach of telling everyone they're stupid is going to sell them a camera…


Who said that I am trying to sell cameras?

I want more investment in better optics and hardware. Not AI hallucinations.

Do you think some random internet comment is going to change anything? Even if it is worded in the best, most persuasive manner possible?

"Hmm, that pdntspa guy on Hacker News is right goshdarnit!", says the Megabux CEO as he jumps out of his chair. "Sheila, get me the engineering director on the line. Tell him to cancel all our AI products!"

Yeah.... not going to happen. Hence, a rant.


[flagged]


Extraordinary claims, extraordinary evidence.

I’m not familiar with Google here, but Apple is quite loquacious in its descriptions of how it approaches computational photography, with extensive use of bracketing and sensor fusion – and nothing at all about generative embellishments.

Not even saying it’s impossible, but I would be absolutely shocked if Apple were actually using anything outside a lot of bracketed exposures and optically-modeled diffs between multiple sensors. More than anything, it would be distinctly off-brand.


> I’m not familiar with Google here, but Apple is quite loquacious in its descriptions of how it approaches computational photography, with extensive use of bracketing and sensor fusion – and nothing at all about generative embellishments.

They put very obvious fake bokeh in their photos, and their users seem happy about it. I don't know what you mean by "generative" and how you'd draw a line between "generative" and not, but they're absolutely not giving you the straight-up picture you took.


This is the Apple that automatically corrects your gaze in FaceTime calls so it looks like you're looking at the camera instead of at what you're actually looking at? That's the Apple you think it would be off-brand for to change your nighttime photos to look better, even if what they're depicting isn't actually real? That Apple?

I don't think they're doing a fake Milky Way, but I don't think they'd be above it if they could come up with a reason to.

https://www.fastcompany.com/90372724/welcome-to-post-reality...


This is the worst, most insidious poison of them all. The very foundation of all propaganda.

"X did Y"

And when it turns out they didn't:

"It doesn't matter because they WOULD"

(Disclaimer: I have no idea if Apple does or doesn't do the milky way thing)


Mortenjorck went back and softened their wording but when I originally responded, they claimed that Apple could never do anything remotely approaching that. I haven't edited my response, but this isn't a case of me claiming X did Y! but it turns out they didn't, and then me saying but they would have!

I'm saying X did Y, which they did, and it means that Z is at least on the table, where X = Apple, Y = FaceTime Eye Contact feature, Z = Milky Way enhancement.


There’s no iPhone that adds a fake Milky Way to night sky shots. Did you just make that up or did you read it somewhere?


> The iphone will add a faint milky way for long exposure shots that are clearly fake.

Where have you seen this? I can't find anything about it online, and I imagine it would be even more egregious than the Samsung fake moon, probably putting it in the wrong place or orientation in the sky sometimes.


Ok maybe I'm naive here but I always thought they found a way to let the camera sensor take in more light over a period of time and render that way. I didn't think they just added a faint milky way. Do you have a source for this?


You're right and OC is wrong. My Nexus 5 from 2013 could take decent single-exposure night sky photos (after post-processing the RAW images). The Google camera now just automates most of the manual work (adjusting exposure/sensitivity, stabilizing, aligning and stacking multiple images, post-processing); see https://ai.googleblog.com/2019/11/astrophotography-with-nigh...
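The stacking step itself is nothing exotic: averaging N aligned frames cuts random sensor noise by roughly a factor of sqrt(N). A bare-bones sketch, ignoring the alignment and outlier rejection a real astro mode does:

    import numpy as np

    def stack_frames(frames):
        # Average pre-aligned exposures; noise std drops ~1/sqrt(len(frames)).
        return np.mean(np.stack(frames).astype(np.float64), axis=0)

    # Demo with simulated noisy frames of the same scene:
    rng = np.random.default_rng(0)
    scene = rng.uniform(0, 1, (100, 100))
    frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(16)]
    print(np.std(frames[0] - scene))             # ~0.10
    print(np.std(stack_frames(frames) - scene))  # ~0.025 == 0.1 / sqrt(16)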

I'm sure the iPhone is just as capable


This is almost what the advertising said. Apple always said that the new iPhone has a better camera. Did they lie to us? :(


The "making up leaves" story was incorrect. https://news.ycombinator.com/item?id=29750660


The Samsung moon debacle was a case of using AI to upscale shots.


No it wasn’t.

It recognized the moon and replaced the real moon taken by the camera with a stored stock photo pasted over it.

There was no “upscaling” involved.


Well, sorry, this is what I meant by AI upscaling vs. old regular upscaling, which doesn't make stuff up.

It's like when Ryan Gosling's face started being hallucinated into generated images because of overfitting on his image in the training data set.


Strange clickbait link with an embedded YouTube video. Here is an easier to follow article:

https://www.theverge.com/2023/3/22/23652488/samsung-gallery-...


Best sentence in the article: "I wasn’t able to reproduce these teething issues myself using the same version of the Gallery app on a regular S22. I tried using the feature on half a dozen photos of babies (and even a screenshot from the updated, less-toothy Sonic trailer) and never saw anything like what this user got. I also wasn’t able to find any other people reporting this type of issue, so it’s impossible to say for sure what’s going on."


Thanks for the article link. Most news aggregators these days just have a click bait title with tweets. It's annoying.


The selfie camera on the Google Pixel 6 Pro (that I bought in Japan) also changes the proportions of my face in portrait mode, which I cannot disable, even with all enhancement features turned off. (Non-portrait mode is fine, though.) I'm not a fan of how intertwined photo and video capture have become with processing. I'm taking all my photos in RAW now, just to circumvent all that.

And even that is soon getting changed, with RAW getting redefined more and more to pick up the data later and later in the image processing pipeline...




A whole new dimension of almost Lovecraftian horrors opens up, and this half assed AI shit is about to permeate every aspect of technology. Imagine what'll happen to the few people that "fall through the cracks" of some unknowable edge-case in various random systems.

Toothless babies? Perhaps your likeness is "invisible" to self driving cars. No one can assure you otherwise.


This is going to make its way into recruitment, hiring, and performance evaluations, and the results will be absolutely hideous.

And self driving cars and justice.

But humans do not have catastrophic failure modes the way AI does - like adding teeth to babies.

Adding AI everywhere is like adding nuclear power into every car - there is a new catastrophic failure mode and possibility of malevolent intent that we do not know how to deal with.


Something people in this thread are missing, is that this doesn't have anything to do with the process of taking a photo (unlike the moon issue). The "Remaster" feature is part of the Samsung gallery, and you can use it to "remaster" an existing photo in your gallery.

Obviously, adding teeth to a baby is insane in any situation, but it's not like you're taking a photo of a baby with no teeth and then seeing teeth in the photo.

I've used the Remaster feature a few times on outdoor shots and it generally does a good job of fixing a bunch of issues with a photo and enhancing details etc.


I'm sitting here and thinking "Why would anyone ever use this remaster feature other than the 'wow AI incredible' novelty?".


I don't think anyone is missing that. We all get that this is an optional feature.

A "remaster" feature should never add teeth.

Nothing wrong with adding teeth, but it should be called the "add teeth" feature, not the "remaster" feature.


Oh yeah, I remember LGR made a video about this when Photoshop added a similar feature. https://youtu.be/hq8DgpgtSQQ

It was funny then, and it's still pretty funny now.


First your kid doesn't have teeth and you want them to, now your kid has teeth and you wish they didn't...good grief.

Pick a lane parents....


That doesn't even make sense. What parents are you even imagining that are wishing their baby had teeth, or that their kid didn't have teeth? That's not a thing. Even if it were (which it isn't) the physical reality of kids having teeth or not is totally unrelated to the topic of janky image filters.


And buy a film camera!


Just stick to paintings. Preferably in a cave. The OG.


A smartphone camera is closer to a cave painting than a dumb camera is; the only difference is that a neural net is the one doing the painting. A dumb camera only captures light.


Does anyone recall the time HN went ballistic offering proof of how the iPhone's AI interpretation was screwing up a photograph and then it turned out there was a leaf in between the subject and the camera?

https://news.ycombinator.com/item?id=29739235

https://news.ycombinator.com/item?id=29750660

Maybe this is true, but it's worth remembering that this community is at Nigerian Scammer Mark level of gullibility.


> Does anyone recall the time HN went ballistic offering proof of how the iPhone's AI interpretation was screwing up a photograph and then it turned out there was a leaf in between the subject and the camera?

No, I don't recall that. It seems you're manufacturing history. Did you see the photo? [1] It did screw it up. The fact that the leaf existed does not negate the fact that the background around the leaf was changed to delete the face.

[1]: https://twitter.com/mitchcohen/status/1476351601862483968/ph...


It's okay. Your recall isn't necessary. Just the ability to click the second link.


He then goes on to say it didn't happen?

https://twitter.com/mitchcohen/status/1476951534160257026


Yeah, except this stupid AI enhancer junk is marketed as a feature on the Samsung phones.


This appears to have been duplicated by multiple people.


The other event also was. Read through the thread and you'll find that multiple HN commenters talk about how they were affected! No one posted images on that occasion, though, and Samsung goofing is a lot more likely than Apple.


Excuse me, what? Are the financial experts that predicted the 2008 crash a year too early gullible?

Well, it's only a matter of time until this type of fraud happens. It's just not possible to predict exactly when.


Does anyone find it strange an article about photos only has videos?


This one, at least, isn't done without you asking. It's not the sort of thing I'd complain about more than any other lossy photo-processing system.


For people that care a lot about image "accuracy": just get a decent camera with a large sensor. Then you will not need the insane level of postprocessing and manipulation a smartphone does to make something decent out of small-sensor data.


Computational photography gone wrong. It's one thing to algorithmically improve a camera's output, another thing to ask an AI to upscale and add in non-existent features.


No, those two things are the same thing.


It depends on what "algorithmically improve" means. If it's just a colour curve correction, HDR blending, or lens geometry fix, then I wouldn't put them in the same category at all.


They're obviously the same thing in Samsung's unfit pipeline but other vendors seem to separate touching image content and only fixing optical aberrations/denoising/tone mapping.


Over time, more and more of the infrastructure that touches user data will be a sandbox like this: https://appleinsider.com/articles/21/01/29/apples-ios-14-int...


Fortunately this is opt-in, but if it's automatic and such "enhancements" can indeed hallucinate things which aren't there, I wonder about the implications of such processing on videos or photos being used as evidence in a court of law.


When people were debating the Samsung moon replacement thing I tried to come up with some examples of how this could go horribly wrong, but never managed to come up with something quite this bad. So kudos to Samsung, I guess, for doing it for me?


This only happens if you hit the "remaster" button. Which I will never use, so it doesn't particularly bother me.


Don't press "remaster"? It's just a feature...


That was Huawei though.


No, it was Samsung. Unless Huawei did it too.

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...


Huawei did it too, although Samsung was more recent https://www.androidauthority.com/huawei-p30-pro-moon-mode-co...


Huawei added their own photo on top of yours. Samsung upscales.


I have a Samsung phone and it feels like it's a combination of Microsoft's software quality, with IBM's innovation capabilities and Facebook's customer service. Quite a treat!


>... and Facebook's customer service.

Is that better or worse than Google's customer service?


IBM is second only to Samsung in number of patents per year.


I am waiting for a smartphone without actual physical camera, that generates all pictures using AI.


Why would anyone buy a phone from Samsung and not expect it to do weird stuff like this?


The alignment people have a point.

We have no way to control these stupid things.


So... you apply an AI remaster filter to a photo and complain that it is remastering your photo?


"Remastering" implies tweaking up the colors or adjusting the exposure in a way that the initial photo couldn't. This is fabricating new details that didn't exist at all.

It would be like a remastered music album adding entirely new instrumentation.


I have remastered albums that replaced sound effects.

Actually there's a popular trend of recreating old video game soundtracks with uncompressed samples and calling it "remastered". Though, it looks like the most 'reputable' channel switched to "restored" for newer videos.

https://www.youtube.com/@ChurchofKondoh/videos


And if the instrumentation is picked automatically by AI and it decides to add a fart sound.

Of a horse.

Underwater.


We've got that! It's great.

https://www.youtube.com/watch?v=OOdyTn2X9oQ (Adobe Enhance for podcasts from this year)

https://www.youtube.com/watch?v=wHduATM-o7M (Microsoft Songsmith from 15 years ago)


This is objectively one of the worst websites I have ever seen. How did this make it to the top of HN?

edit: Ah, I meant one of the worst articles I've ever seen.


Boingboing has a long history


Looks like times are hard.



