The article alludes to something I've noticed since upgrading from the 5c to an 11 Pro last year - the computational processing and/or current hardware limitations mean these phones can shoot images that look great on the phone.
I mean, you scroll past images on Instagram, look at your own photos on the phone you took them with, look at your images on your Instagram feed on other people's phones or on the Instagram web app, and yeah they look pretty good.
Get them on something bigger though? Look a little closer? Use a decent resolution monitor? A reasonable sized print? They look bad. I'm not even talking about pixel peeping here, which is bullshit. I mean they look overprocessed. Like the machine took a course in Photoshop on YouTube and didn't finish it.
The examples in the article show this: they're full of little mistakes that don't happen when you have proper hardware - larger aperture lenses on larger sensors. Proper graduated neutral density filters to prevent halos. Sensors with large enough area/photosites to give better dynamic range or better high ISO performance. Real camera movements to tilt/shift/rise/fall/swing.
I mean, most of this is boring technicality stuff that doesn't really matter in the grand scheme of "is this an interesting photo" type considerations. It's more a case of computational photography is currently trying to climb out of its uncanny valley. It'll get there eventually.
For now we're still at the point of "Eventually, anyone that values photos will carry a standalone camera." I say this as someone who shoots mostly with an iPhone, but still carries around a 4x5 camera when it matters.
Edit: to expand on this a bit I've just uploaded some images I shot with the 11 Pro a couple of weeks after I bought it back in October: https://leejo.github.io/2020/04/28/off_season_ibiza/ # the caveat is that I was still figuring the phone/camera out but wanted to see "what it could do out of the box" - and yeah, the images look good on the screen with very minor lens corrections and levels in Lightroom, but look overprocessed even with my attempts to pull that back: the sky in the photos looks added in, there are halos around parts of the images where HDR is trying and failing, the sharpening is too aggressive (real life doesn't have "sharp" edges), and so on. When you view these at larger sizes than seen in the blog post they look much worse.
I think you have somewhat unrealistic expectations of what even a decent photographer could achieve with a graduated neutral density filter (not to mention tilt/shift lenses... maybe 0.1% of even professional photographers use those).
The photos in your blog post would have been really hard to achieve with "traditional" photography methods to cover the huge dynamic range (a graduated neutral density filter won't cut it for a street opening facing the sky). Most experienced photographers would employ some form of exposure stacking in that scenario, same as the iPhone did. Of course having more control over the blending would be great.
I think if you tried to take the same pictures with a traditional camera you would be surprised by how good the iPhone pictures are compared to the effort it takes to achieve a comparable result.
Oversharpening is a curse in the industry to be sure, unfortunately to most people an oversharpened image looks "better".
Oh no doubt - it still takes experience and technique to achieve a good effect with those tools, and of course they can't be used in all situations so other techniques are required; but it still seems to me that "out of the box" the software is applying HDR-like techniques when they aren't needed, or that things would look a bit better if every part of the exposure weren't normalised so much.
It seems that the software is doing the negative film photographer's trick of "expose for the shadows and the highlights will take care of themselves", except it's "expose for the shadows and the HDR will take care of the highlights" - ending up with this overprocessed-looking fake-sky effect.
It's like there can't be any ambiguity anymore in a photo due to the exposure, the shadows and the highlights can't clip, we have to see everything. That seems a big change in mindset to me.
I've been giving this some more thought and I think in many ways the photos from modern smartphones are more similar to paintings than what we are used to from photography.
They try to create a visual representation of what our visual cortex would perceive the scene to look like.
A painting of a room with a window would typically show both the room and the scene from the window "perfectly exposed", like the iPhone is doing with your skies.
This looks very strange to someone used to images from more traditional cameras, and the artefacts like halos stand out.
Side note: I'm surprised by how heavy-handed the HDR looks in that particular set of photos. I'm used to seeing modern smartphones do a much better job of auto HDR in most situations.
Photography on the iPhone is good enough for the majority of users. There's a physical resolution limit on lenses, hence the sensors have stayed the same size.
The iPhone offers strong post processing tools for photos and video that make it easier and more accessible.
I’d love a smartphone with an Arri Prime lens equivalent, but it would never fit in my pocket
That’s because they’re only looking at the images on their phone.
As an aside, I’m a professional photographer and my greatest wish would be that people shop around for a photographer on a desktop or at least a tablet. Incredibly tiny effective image sizes can hide a lot.
This. I have no problem with the Apple enthusiasts diving into the details, but really, 98% of the people taking pictures with their smartphones are happy with iPhone 5 quality photos.
I maintain that the 6S remains all that the majority of users need. Anything added after that was just to sell phones. And I’m a big Apple fan.
This is really cool. One of the best things about Rome was exploring at night. I took plenty of awesome pictures with the 6s, but this takes the cake. Thanks for sharing!
I took some pictures of some distant mountains with my iPhone 5 and I CRANKED the zoom to frame the mountains the way I wanted them. All the resulting effects, plus the twilight of Alaskan summer, made intriguing pictures, like those shots in movies of the mountain the group is striving to reach. I'd say painterly but I'd flatter myself. I had a nicer camera available but not in hand.
Just like Instagram has developed its own aesthetic, so has "shot on an iPhone". I fully agree with you, those images have the typical "surface blur" look that all tiny lens + backlit camera combinations produce.
I'd say it's even worse with video. My phone can shoot 4K videos, but watching them on a 4K TV is painful. They'd probably look better if you downscaled them to 1080p and then re-upscaled them to 4K. They'd be a bit more blurry, yes, but they wouldn't have this artificial flat look everywhere.
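The downscale-then-upscale trick is easy to try on a single frame if you want to see whether you prefer the softer look; a minimal OpenCV sketch (the file names are just placeholders):

    import cv2

    frame = cv2.imread("frame_4k.png")  # a frame grabbed from a 4K video

    # Shrink with area interpolation (good for downscaling), then blow back
    # up with Lanczos; the result is softer but loses the over-processed edge.
    small = cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_AREA)
    soft = cv2.resize(small, (3840, 2160), interpolation=cv2.INTER_LANCZOS4)

    cv2.imwrite("frame_4k_softened.png", soft)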
Memorizing high-frequency patterns and then reproducing them is one of the very few tasks that AI is actually good at. As such, one can take the raw HDR exposure data from a phone and the noisy raw image data from the sensor and produce natural-looking high-res images from it. The way it works is that the AI has memorized the small-scale details from millions of good photos taken with large lenses and proper equipment, and that way it can replace the noisy phone sensor data with its noise-free counterpart.
However, such an AI currently needs 10GB of GPU RAM and it runs for 2 seconds on my 1080 TI for a 4K image. So we'll need another 10x in mobile GPU performance before it becomes feasible to integrate such technology into camera apps. And then of course another 20x for live video processing.
I think these models will drop in requirements pretty dramatically - there is a lot of work in finding sparse subsets of NNs that actually do all the work for example, and embedded hardware is getting a big bump in neural network inference due to massively parallel FMA blocks being added to their DSPs.
Also remember that the majority of academic work is done based on benchmarks that don't include inference time or hardware requirements. As these things start becoming useful without these restrictions industry will start tweaking the cost functions that are being minimised to make it useful in the real world too.
Sadly, I don't have a publicly available link for this.
But no, this is not AI upscaling.
Instead, it represents the input image in a feature space trained to store natural images and then reconstructs the output from that intermediate representation. If chosen well, the intermediate representation will store details relevant for high-quality images, but discard those specific to low-quality images, like noise.
It's called a convolutional autoencoder with a bottleneck.
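For anyone who hasn't met the term, here's a minimal PyTorch sketch of a convolutional autoencoder with a bottleneck; the layer sizes are purely illustrative, not the actual model being described:

    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: squeeze the noisy image down to a small feature map
            # (the "bottleneck") that keeps natural-image structure.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: reconstruct a clean image from the bottleneck,
            # discarding whatever the bottleneck couldn't represent (noise).
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = DenoisingAutoencoder()
    noisy = torch.rand(1, 3, 256, 256)   # stand-in for a noisy sensor crop
    restored = model(noisy)              # same shape, reconstructed from the bottleneck

You'd train something like this on (noisy, clean) pairs with a plain reconstruction loss; the interesting part is what the bottleneck learns to keep and what it learns to throw away.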
Actually, that might be a pretty nice addition to Adobe's Lightroom cloud service. It already syncs between my desktop and my phone, so why not run some auto-improvements upon import.
But I believe for most users, it needs to be close to instant on the device. Nobody wants to wait 5 seconds to see if their selfie turned out OK.
There's also one big area where photos have noticeable artifacts even viewed on the phone: portrait mode. Almost all portrait mode photos will have areas that are blurred when they shouldn't be, or in focus when they shouldn't be. Part of an ear or piece of hair that's blurred out. Or the background between your fingers is in focus.
I've found the iPhone's portrait mode to be near unusable despite looking, at first glance, like it might be a great shot. Hair and ears are always an issue (and not something you can avoid in portraits very often...) and another classic example is with a wine glass, where the surroundings blur but the view through the glass stays sharp.
It's interesting to note that the portrait modes that come with iOS and Android are radically different, built on completely different technology. So you will see different artifacts on each.
To be fair, the examples in the article are trying to show what happens when you use this post-processing in situations it was not designed for, sometimes going to extremes to deliberately demonstrate failure modes, so it would be surprising if it didn't have weird artifacts. None of those photos are using it as intended.
I agree some HDR skies in your shots do look weird, but frankly that's probably just HDR. How do you find the phone's photos when you switch off the effects you don't want?
First I looked at your pictures and, appreciating these are tests, I thought they were nice. I liked the composition in pretty much all of them, thought they were generally pretty good pictures and couldn't see what was wrong with them.
Then I read your description and if you hadn't mentioned the skies look added in I wouldn't think twice about it – now I can't stop seeing it. It really does touch on this feeling I've had towards my own photos for some time, thinking something is off but I don't have the expertise to understand what. I think you may have just nailed it for me – thanks!
A couple of points on this. Phone cameras are mainly used by people who don't intend to do any (or only very minor) editing. Overall, both iOS and Android can produce great results even with mediocre lenses/sensors. Computational photography basically enables them to compensate for that, and machine-learned configurations of the editing steps a pro would do manually actually produce nice results. Of course these results are not intended for further post-processing, and a lot of the applied edits are lossy in the sense of compressing dynamic range, clipping blacks for a contrasty look and feel, blurring out noise & detail not visible on a tiny screen, etc. But considering iPhones have very high resolution screens these days, that should still be fine for that and simple prints. But let's be real, most of this stuff gets produced for and consumed on phone screens by people who are not even close to being decent amateur photographers.
But for anyone who is used to dealing with proper gear (cameras & lenses) and having the freedom to make very subjective artistic choices for things like white balance, local contrast, tone mapping, noise reduction, etc., what a phone does is always a compromise. And it's also a one-size-fits-all solution optimized for mass appeal, which usually means "oooooh, blue sky, pretty saturated colors, super sharp/contrasty photos", which as any post-processing enthusiast knows are all variations of compressing dynamic range (i.e. super lossy steps).
I fall in the last category, but I do appreciate the best effort my cheap Nokia 7 Plus camera app makes. I occasionally use Open Camera to get raw DNGs for editing in Darktable, but they actually do a decent job with the JPEGs given the noisy original. My shiny new Fuji X-T30 produces much nicer raw files with tons of dynamic range and detail that simply is not there on a mobile phone. All of the AI stuff that happens in a phone you can also do with a decent camera, but it requires lots of specialized tools and tweaking (e.g. Hugin for HDR stuff).
But the best camera is the one you have with you and for most of us that's usually a smart phone.
While I agree with you, it seems that most people around me just don’t see it. Even photos in portrait mode which don’t hold up on a phone sized screen are seen as good to many of my peers and I’ve become the weird one for asking people to take pictures without excessive processing included.
I wonder if in a few years we’ll look back at the photos from this time and think they aged poorly.
I mentioned this in another thread recently, but I think most people do notice when they see the difference. I take a lot of photos with a real camera and sync them all to my iCloud Library. When showing someone a photo, they almost always comment, even at very small sizes or before the full resolution version downloads, “Wow, you took that on your iPhone?!”
People see phone photos all the time so they’re used to it and normally fine with it, but given an opportunity, they will notice and appreciate quality photos.
I've considered switching to a dumbphone and a camera, but anything that doesn't take "live photos" isn't something I'm going to use, at this point. It's the last feature added to a computing device that made me go "holy crap, that's like actual magic!" when it was announced, and it went on to more-or-less perform as advertised.
Windows Phone had this feature first, and I remember it was my go-to feature to impress iPhone and Android users with, together with the low-light ability, which was comparable to the iPhone X at the time of the iPhone 5 (on the Lumia 920). I went from a Lumia 950 to an iPhone SE (first gen) and the decrease in photo quality and feature set was dramatic.
Too bad Windows Phone 10 was such a slow and buggy mess, and Microsoft had by that point pretty effectively managed to kill the app ecosystem by rebooting it three times.
They capture a second or two of (lower-resolution) video frames before and after the photo itself, plus audio. The UI on the iPhone is that you hold your finger on the photo and it plays the "live" version. Under the hood IIRC it's just a short video + still bundled together in a container. The effect is reminiscent of the magical photos in the Harry Potter films, and when looking back at photos of one's kids from years prior it really is magical. To take them, the UI is the same as taking any other photo, except that you must know to keep the camera steady a second or two before and after taking the photo itself (it'll still be captured just fine regardless, but the "live" part will be a blurry mess if the camera's moving). That's the real genius of it: no extra effort, really, and it's automatic provided live photos are enabled, so you don't have to decide whether you want to take a video versus a photo; you just take the photo and get a tiny little video to go with it.
Are the other frames lower resolution? The short videos are quite low frame rate. On iPhone X at least you can choose a different frame to use as the “main” photo. They don’t seem to be lower resolution.
I don't have any specific knowledge of Apple's actual implementation, but I've always assumed it used the full-resolution still image as a keyframe (I-frame), and the video clip as B-frames before and afterwards, compensating for changes over time. Choosing a new key photo may be full resolution, but is likely to be degraded in sub-perceptual ways.
It does pull multiple full frames off the sensor in rapid exposure tho, and chooses whichever one it thinks best (blur, ML, whatever) for the initial key photo.
I think there's a (totally normal and full-res) photo and separate video, if you look at the actual files. I don't know how that works with the ability to choose a different still.
I've always found 'proper' digital cameras are also in their own uncanny valley compared to negative film. Especially with older sensors, (and as an amateur with very little skill) it is so easy to blow out highlights with a digital camera in a way that looks completely unnatural. It is like its own poor emulation of film.
In a way the iPhone and recent HDR techniques on smartphones improve on this as it's now quite unusual for a phone to overexpose highlights. The tone mapping could be improved to lessen that 'fake HDR' effect you can see in your examples, but that doesn't seem like it would be too difficult.
Although this is really quite recent. It used to almost be a hallmark of my iPhone that, unless I took care, things like clouds would be blown out. I was somewhat prepared for this as I mostly shot slides pre-digital for a long time after I didn't have easy access to a good darkroom any longer. And slides don't handle over-exposure well at all.
>>Get them on something bigger though? Look a little closer? Use decent resolution monitor? A reasonable sized print? They look bad.
So I don't have an iPhone, but I do currently have in my home office two pictures taken with my OnePlus 5T, both printed on A3 size paper and framed, hanging on my walls. They look absolutely perfectly sharp and I'm constantly impressed that something so good came out of a smartphone.
I think Apple is specifically targeting the TikTok/Instagram "video" crowd with this device, people who earlier used Android or an older iPhone.
Videos shot with the iPhone SE 2 are indistinguishable from those shot with the iPhone 11, which by itself was class leading. So, naturally, people are going to take notice of better quality videos on these platforms.
This strategy is likely going to be successful in the US; what I'm doubtful about is whether, with a > $550 price tag, the iPhone SE 2 is a viable choice in developing markets, even though TikTok and Instagram are quite big there.
They all look fine to me on a 17" screen tbh. I couldn't give 2 shots about professional photography, and neither could 90% of the people viewing your Instagram.
That's pretty much the age we live in, the best Youtube videos, for example, have extremely bad videography and no audio processing from a professional PoV, but they get millions of views because they're funny to most people.
I mean, should I even try to mention the POTUS? (commence the downvotes heh)
I have no idea why people's standards have regressed, but they did.
I don't know what you are talking about. I am a regular person who has never spent time obsessing about photography, and all the pictures in the article on the left (I assume they took them with the phone) look very, very good on my 40" 4k screen.
I installed another camera app a couple of months after getting the phone, which allows me to do that. The phone has essentially replaced my Fuji x100t for one project, mostly because of its wide angle lens.
AFAIA the default camera app doesn't allow you to shoot in RAW; it doesn't even give you the fundamental manual exposure controls, and that's part of my point - the defaults are designed to make it easy and make the photos look good on the phone, which is what 99% of users want.
I'm absolutely fine with that, BTW, but it's rather frustrating that the out of the box camera app doesn't include the ability to control the three fundamental exposure factors. Not even hidden behind an "advanced" setting or anything.
The system camera app isn't meant for people who want manual controls. It's the modern version of a Polaroid. The way it works is fundamentally incompatible with manual controls, and adding a separate manual mode would confuse the vast majority of people who have no interest in advanced photography.
I'd argue the other way - the camera on modern phones is a massive selling point; I know many people who choose their phone based on it. It therefore makes sense that the out-of-the-box app should include manual controls somewhere as an option.
I know Apple like to push the whole "shot on an iPhone" thing, and they want to make sure the default app is giving the "best" quality. So maybe that's the reason they don't include them?
And to take your analogy further - I have a Polaroid 195, it has full manual controls. Of course, it was aimed at pro photographers who wanted those controls, and sold at a premium.
And to prove my point, if you ask someone old enough to remember a Polaroid camera to describe it, they'll describe the SX-70. It was revolutionary for how it put photography into the hands of non-photographers. The 195 doesn't even warrant a footnote in history.
True, but since f-stop is a ratio (focal length over aperture diameter), at this scale the more important metric for image quality is sensor size. You can compare the depth of field in a 35mm shot with a 4x5 image made at the same f-stop to see the wildly different DoF characteristics.
When it comes to the overall quality of the image, counting both the size of the sensor and the absolute diameter of the aperture is counting twice. The amount of light that hits the sensor, for a given field of view, is entirely determined by the absolute diameter of the aperture. Bigger sensors collect more light because you can use a bigger absolute aperture for a given f-stop and field of view. Or if you prefer, a larger absolute aperture collects more light for a given f-stop and field of view because you can use a bigger sensor.
Obviously, the format has implications for DoF, but those can be either positive or negative. When I shoot 4x5, I spend most of my time fiddling around with movements trying to get everything in focus :)
Exactly. The thing that really matters for light gathering is Etendue -- the product of the area of the source and the solid angle that the system's entrance pupil subtends as seen from the source -- which is conserved (in a lossless optical system).
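To put rough numbers on the "absolute aperture diameter" point (the focal lengths and f-numbers below are just typical values, not any specific phone or lens):

    import math

    def pupil_diameter_mm(focal_length_mm, f_number):
        # f-number = focal length / entrance pupil diameter
        return focal_length_mm / f_number

    phone = pupil_diameter_mm(4.25, 1.8)       # ~26mm-equivalent phone main camera
    full_frame = pupil_diameter_mm(26.0, 1.8)  # 26mm f/1.8 on full frame, same field of view

    area_ratio = (full_frame / phone) ** 2     # light gathered scales with pupil area
    print(f"phone pupil {phone:.1f} mm, full-frame pupil {full_frame:.1f} mm")
    print(f"light advantage ~{area_ratio:.0f}x (~{math.log2(area_ratio):.1f} stops)")

Same f-stop, same field of view, but the bigger format's entrance pupil is about 6x wider, so it collects roughly 37x the light (a bit over 5 stops).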
There's more nuance than that, because sensors have a certain fixed amount of area, given a resolution, dedicated to control circuitry. Larger sensors make the control circuitry a smaller fraction of overall area.
The focal length of the lens and the aperture diameter is what controls depth of field. The sensor size doesn't actually matter; you can crop a photo later (which is like using a smaller sensor) and the depth of field doesn't change. The way it gets involved is that a certain focal length with a certain sensor size will result in objects in the photograph being a certain size. That means that a 35mm camera will use a 50mm "normal" lens, and a 4x5 camera will use a 150mm "normal" lens. Attach a 50mm lens to a 35mm camera, take a picture, replace the camera with a 150mm lens on a 4x5 camera, and objects in each photograph will be about the same size. But, the 150mm lens will obviously produce more blurring away from the focal plane; that's just what longer lenses do.
The iPhone's telephoto lens has a 6mm focal length, and so tends to produce an image with more in focus than a lens that's 25x longer.
(Why do phones use a 6mm lens and not a 150mm lens? Because they don't have 150mm to spare between the lens and the sensor.)
To add to what you said, the depth of field calculation relies on the circle-of-confusion that you pick (which is based on the viewing distance and the size of final reproduction), and is therefore also closely dependent on the resolving power of the sensor, and thus pixel density. In addition DoF also depends on the focusing distance.
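To make that concrete, here's a rough sketch using the standard hyperfocal-distance approximation. The focal lengths are the "normal-ish" ones mentioned above, the circles of confusion are common rule-of-thumb values for each format, and f/2 across the board is purely illustrative (real large-format lenses are much slower):

    def depth_of_field_m(focal_mm, f_number, coc_mm, subject_m):
        f, N, c, s = focal_mm, f_number, coc_mm, subject_m * 1000.0  # work in mm
        hyperfocal = f * f / (N * c) + f
        near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
        far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
        return (far - near) / 1000.0  # total in-focus range in metres

    # Subject at 2 m, f/2, circle of confusion scaled to each format.
    print("phone 6mm, CoC 0.004mm :", depth_of_field_m(6, 2.0, 0.004, 2.0))   # ~2.2 m
    print("35mm 50mm, CoC 0.03mm  :", depth_of_field_m(50, 2.0, 0.03, 2.0))   # ~0.19 m
    print("4x5 150mm, CoC 0.15mm  :", depth_of_field_m(150, 2.0, 0.15, 2.0))  # ~0.10 m

Same framing, same f-stop, and the phone keeps more than ten times as much of the scene in focus.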
>(Why do phones use a 6mm lens and not a 150mm lens? Because they don't have 150mm to spare between the lens and the sensor.)
That is not true for today's lens formulas. You're probably thinking of old singlet designs from centuries back. Telephoto lens designs, for example, allow you to make a lens much shorter than its focal length. It's all about getting the final magnification by positioning positive and negative lens elements, and of course other corrective lenses, appropriately in the lens formula. This is why you can use "tele-converters" on lenses to increase their magnification (and thus their effective focal length).
Seeing as most of the photos taken are meant to be consumed on the phone anyway, it's like bagging a mirrorless camera because you wanted a proper banner-sized photo.
Something that I have wondered about is the photo quality of film photo enlargements vs. digital (which I guess is just zooming).
Film is continuous, right? So no amount of digital encoding would truly encode a strip of film. Of course there is a point after which it doesn't matter, but I am more interested in the technical look and feel of film vs. digital. A few years ago I looked at a film enlargement to A4 in a scrapbook and I have to say it had a different feeling about it. There were also funny artifacts of the kind that make superstitious people believe in invisible spheres that, ehm, are not invisible on film...?
If you contrast this with a CD recording, the bit rate is limited even if it's an audio CD due to the way they were recorded. Vinyl is supposed to be limited to 48 bits, but perhaps you can engrave in an analog way?
Vinyl is limited to around 12 bits. It has a dynamic range of around 70dB. Vinyl sounds "nice" because recording and playback add distortion and noise at the low end, which gives bass and drums more weight, while the top end is smooth and extends higher than CD - which sounds like "air" because any intermodulation distortion in the amp and speakers will fold it back to audibility, even though nominally it's ultrasonic.
Chemical film creates distortions of its own. Colour film often pushes the reds to make skin tones, lipstick, and sunsets warmer. You can choose how much grain you want and B&W is often shot on fast high-grain film to give it a gritty textured look. Landscapes need slow micro-grain film to look as lush and detailed as possible.
And so on. The point is chemical film isn't any more real than a digital CCD.
Digital consumer displays all have colour distortions and dynamic range limitations, and so does chemical processing. Even if you apply no deliberate processing to an image, by the time it ends up on paper or a web page it usually looks quite different to the scene you'd see with your eyes.
So yes - the "feel" is different. And that's a creative choice - or it can be, for the professionals. Polaroids look very different to medium format.
Amateur phonecam users get one-size-fits-all enhancement whether they want it or not, in addition to some basic editing options. If they didn't their lifestyle shots on IG would all look lacklustre, and they wouldn't be happy about that.
Assuming that you have the technical ability, how many pixels do you need at how many colours to completely simulate film? I think that should refine my question.
Your common high end DSLR definitely outresolves 35mm film nowadays, especially at higher ISOs. It’s not a 1:1 comparison, but film does have grain and that grain size is the “limit” of the data.
That said medium format film and larger would still be hard to beat.
As far as tonality, a modern sensor should also have greater dynamic range than almost all film. They do behave differently though - highlights are harder to overexpose on film but dark areas are easier to underexpose, compared to digital.
In theory, if you had the money and incentive, you should be able to develop films that are much higher resolution than current digital hardware? Like the large format films you refer to.
Film is certainly not continuous—its spatial resolution is limited by the size of the individual light sensitive crystals that capture the image. These appear as the grain structure in enlargements.
Interesting question! Actually all photographic imaging systems run into physical limits set by the quantized nature of light. So even if you could shrink things down to one molecule, you’d have other problems first, principally an effect called “shot noise.”
Essentially, a film grain or a cell on a CCD sensor is like a “bin” that collects and counts photons. As you make the area of the bins smaller, there are fewer photons collected in a given image, and statistical fluctuations from sampling become more relevant. You get more “noise” in the final image.
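A tiny simulation shows the bin analogy in action: photon arrivals are Poisson-distributed, so a pixel's signal-to-noise ratio goes as the square root of the number of photons it collects (pure numpy, arbitrary numbers):

    import numpy as np

    rng = np.random.default_rng(0)

    for photons_per_pixel in (10, 100, 10_000):
        counts = rng.poisson(photons_per_pixel, size=1_000_000)
        snr = counts.mean() / counts.std()
        print(f"{photons_per_pixel:>6} photons/pixel -> SNR ~ {snr:.1f} "
              f"(theory: {photons_per_pixel ** 0.5:.1f})")

Shrink the bins (or dim the light) and the relative fluctuations grow, no matter how good the sensor or the film is.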
Those physical limits are surely at a much finer resolution than common high-end digital cameras, though?
I understand that you are saying that you see the effects even if you are not at the "molecule" level, as you say, but I would think that the hard theoretical limit from photons should be at a finer granularity than high-end digital cameras.
With the exception that holographic film images the diffraction pattern produced by two interfering laser beams, so typical grain size there has to be on the order of tens of nanometers. It can certainly be done, just is pointless for normal film =)
It can be quite small, but I believe it largely depends on the ISO rating of the film. Larger grain = more light collected.
I shoot wetplate collodion sometimes and that has an effective ISO of under 2 and the detail is astounding - especially for a process that’s 150 years old.
I don’t get it, it’s a phone, with a camera. The camera just needs to be good enough, and for most people it’s been good enough for years.
Most phone reviews these days focus waaay too much on the camera, because the reviewers are often people who are also into photography or shooting videos.
For myself, I care solely about privacy and price, so I only have one option if I need a new smartphone. I still think the iPhone SE needs to come down in price, but it's the only choice; there are no competitors.
For you yes. It's perfectly fine to not care about the camera but
> I don’t get it, it’s a phone, with a camera. The camera just needs to be good enough, and for most people it’s been good enough for years.
Can you see how some people really do care about the camera on their phone? I personally don't make my phone buying decisions based on the camera either but I heard a parent make a comment a while back when discussing if they wished the phone had a better battery or better camera and they said camera, easy. When pressed they said something to the effect of "In 5 years I'm not going to wish that I got another hour of battery life 5 years ago but I am going wish my pictures from 5 years ago were better".
Yes I know that "if you really care about photos you should get a real camera" but I think we can all agree that the best camera is the one you have with you and personally I'm not interested enough to carry a big DSLR with me. But that's just me.
Funny you say that. I have never been one to take pictures. I took maybe 30 pictures in the first 30 years of my life. Even with my first few smartphones, I hesitated to start using the camera, because the quality was so bad. However, when I got a phone with decent camera, I started taking pictures. And the better cameras got, the more pictures I took. After over a decade of smartphones, I now value the camera as a feature, and good-enough no longer cuts it. And the best part - I don't ever even share my pics, so I don't even know why I care.
I think the only camera feature I particularly care about is the introduction of high speed video. Having in my pocket a way to take video of things and play it back in slow motion has been invaluable (for teaching karate and other physical stuff).
> I don’t get it, it’s a phone, with a camera. The camera just needs to be good enough, and for most people it’s been good enough for years.
Imho this is due to the fact that they don't have much left to improve on. That's why we get 4K+ / 120Hz screens on mobiles, 108MP sensors are already rolling out, &c.
It's a dick size contest; it doesn't make sense and no one actually needs these things, but it's the only way to make people buy the next iteration.
In my opinion, the camera is the distinguishing feature that puts the top players above the rest. Yes, a high end flagship phone uses nicer materials, faster processors and all that jazz, but most of that stuff doesn't really justify the price increase. Do I really care that my phone is made with aluminum when I'm using a plastic case anyway?
However, the difference between the cheap and high end cameras is fairly big. And for people who use instagram a lot, or want to take good photos of their kids (I'm in the latter group), the camera matters a lot. In fact, we're getting to the point where high end cellphone cameras are rivaling amateur DSLR cameras. No, we're not there yet, but as Annie Leibovitz says, the best camera is the one that you have on you.
"Good enough" is a sliding window when it comes to cameras.
Would I like my phone camera to be good enough to take pictures I can blow up and put on the wall? Yes, and I'm willing to pay a little bit for that.
OTOH most other features? Nah, they were settled some time ago. So long as there is enough processing power (there is) and storage (there is this too), the screen is shiny enough (last few handset gens from most vendors have been), connectivity is good enough (we're how many years into 4G?) then really the camera is the only feature I'm interested in seeing improve. Android or iPhone. Maybe the battery life too.
Also, on the iPhones it has actually made the device more annoying because of a very simple thing: the back is not flat! This creates a very bad experience when the phone is on a table or any surface. So the ideal phone would not be an iPhone SE as many suggest this week; it would be an iPhone SE with a flat back, whatever the camera is.
Oh, I don't know - I treat it like the $1K device it is and I have only ever broken one screen in over a decade. Stuff happens - oh well. I love the thinness and weight of the iPhone and detest the bulk and weight added by cases. Just don't toss stuff around haphazardly.
Also, the Apple Watch has dramatically cut down on the number of times I have to fish my phone out of my pocket; usage patterns shouldn’t be discounted.
If I had kids using my phone all the time or was constantly juggling it I might feel differently, but I reject the notion that everyone should just put their phone in a case and that be considered the norm.
It has nothing to do with Steve Jobs or Apple. I've never owned an iPhone. Every single Android phone I've owned, I used a case. Same with pretty much everyone I know who has moved beyond the flip phone. It's just a fact of life for pretty much everyone.
I never used a case, ever, on any iPhone, and never broke any. I still have the non-flat-back issue. Does that mean I must buy something extra for my x00$ phone to make it behave "normally"?
Not only are there cases, which resolve the issue, but most people use phones while held in their hand, so the bump doesn’t lie against anything. The problem with the bump is purely aesthetic.
Why should a phone be required to lie flat on its back? And if raised a little by a bump, how does that impair usage? What’s the defect? The bump, if anything, is a feature, because it prevents the whole back of the phone from being placed against a surface and possibly being scratched.
It's not a defect. You want the phone to work differently from how it was designed, and you have a tool to achieve that (a case). You can use the tool, or rage at something you don't have the power to change.
The whole portrait thing is going to plague this decade's pictures. We'll cringe when we look back at them the same way we cringe at 80's shoulder pads.
Side note: depth of field blur is not a gaussian blur but a bokeh blur. Most software uses gaussian blur to fake depth of field but this always looks unnatural.
Bokeh blur looks more like a lot of circles.
If you compare the iPhone 11 blur with the SE blur it looks like the SE is using bokeh blur which looks much more natural.
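You can see the difference on a toy image with a single bright highlight: a flat disc kernel keeps it as a hard-edged circle (the "bokeh ball" look), while a Gaussian smears it into a soft blob. A minimal numpy/scipy sketch:

    import numpy as np
    from scipy.signal import fftconvolve

    def disc_kernel(radius):
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        k = (x**2 + y**2 <= radius**2).astype(float)
        return k / k.sum()

    def gaussian_kernel(sigma, radius):
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return k / k.sum()

    # One bright highlight on a dark background.
    img = np.zeros((101, 101))
    img[50, 50] = 1.0

    disc_blur = fftconvolve(img, disc_kernel(15), mode="same")             # crisp circle
    gaussian_blur = fftconvolve(img, gaussian_kernel(5, 15), mode="same")  # soft blob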
It's more subtle than this: bokeh blur can look very different depending on lens design and configuration. It can look like toruses (tori?), various sided polygons, stars, double overlayed low-pass circles with resonance at the edges, ovals shaped and rotated into a full-picture swirly effect, and so on and so forth.
Depth of field blur is a very organic effect and in many cases part of the signature look of lens designs. It is not just one, parameterless filter one can apply to an image.
>depth of field blur is not a gaussian blur but a bokeh blur
This is one of my main problems with those big-time YouTubers. Since portrait mode on the iPhone came out, all the other manufacturers followed, and all these reviewers turned this into "who can extract the head of a person and blur the background" contests, and they kept praising the phone that would do the sharpest extraction.
As if those people had never seen an actual DoF blur? Then some manufacturers started adding random bokeh flares...
It's one of those things that keep my faith in Apple: they actually have people who know a thing or two and don't jump on the next megapixel / bezel-to-body ratio / ISO race or whatever the next fad race is.
This "who would do the sharpest edges around the object on portrait mode" contest annoyed me so much that I quit watching gadget reviewers. Next time I watched reviewer videos they all were talking about "Which company does the best color science" and quit again.
bokeh is one of those terms that is often misused, as it does not describe a type of "blur" but the aesthetic characteristics of how out of focus elements look from a particular camera & lens.
You can apply the same thing to algorithms. Like many lenses though, so far they all have really bad bokeh.
You are quite right that the optical effects are far more complex than a single gaussian blur applied on a background mask. They are also much more complex than the modeling currently being done with limited depth information, or even worse, faked depth information.
The portrait modes on these are getting really good. The blur is pretty convincing looking. The only open-source software I know that does similar stuff is body-pix, which does matting, but I don't think it generates a smooth depth map like this thing does. It would be cool because then you could do a clever background blur for your Zoom backgrounds with a v4l2-loopback webcam.
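The compositing step itself is pretty simple once you have a depth map from somewhere (a portrait-mode photo, a stereo pair, a segmentation/depth model); a hedged sketch where the file names, the threshold and the depth-map convention are all placeholders:

    import cv2
    import numpy as np

    image = cv2.imread("frame.png").astype(np.float32) / 255.0
    # Assumes an 8-bit depth map where larger values mean "further away";
    # invert it if yours is encoded the other way around.
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    blurred = cv2.GaussianBlur(image, (31, 31), 0)

    # Smooth blend between sharp foreground and blurred background; a soft
    # mask hides the hard matting edges that portrait modes struggle with.
    mask = np.clip((depth - 0.5) * 4.0, 0.0, 1.0)[..., None]
    composite = (1.0 - mask) * image + mask * blurred

    cv2.imwrite("fake_portrait.png", (composite * 255).astype(np.uint8))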
By the way, I decided to also quick summarize the usual HN threads that have the trigger word iPhone in it:
- No headphone jack
--- Actually this is good because ecosystem built for it
----- Don't think ecosystem is good. Audio drops out
------- Doesn't happen to me. Maybe bad device.
----- Don't want to be locked in. Want to use own device.
------- That's not Apple philosophy. Don't know why surprised.
--------- I have right to my device
----------- cf. Right to Repair laws
------- Can use own device with dongle.
--------- Don't want dongle. Have to get dongle for everything. Annoying.
----------- Only need one dongle.
------------- If only audio, but now can't charge.
----------- Use dongle purse.
--- Apple quality have drop continuous. Last good Macbook was 2012.
----- Yes. Keyboard is useless now. Have fail. Recalled.
------- I have no problem with keyboard.
--------- Lucky.
------- Also touchpad have fail. Think because Foxconn.
------- Yes. Butterfly? More like butterfly effect. Press key, hurricane form on screen.
----- Yes. Yes. All Tim Cook. Bean Counter.
----- Yes. Many root security violation these days.
------- All programmers who make security violate must be fired.
--------- Need union so not fired if manager make security violation.
----------- Don't understand why no union.
------------- Because Apple and Google have collude to not poach. See case.
------- Yes. Security violation is evidence of lack of certification in industry.
--------- Also UIKit no longer correctly propagate event.
--- Phone too big anyway. No one make any small phone anymore.
----- See here, small phone.
------- Too old. Want new small phone. Had iPhone 8. Pinnacle of small beauty.
------- That's Android. No support more than 2 months.
--------- Actually, support 4 months.
----------- Doesn't matter. iPhone support 24 centuries and still going. Queen have original.
--------- Yes, and battery on Android small.
--- Will buy this phone anyway. Support small phone.
----- No. This phone is also big. No one care about small hand.
------- Realistically, phone with no SSH shell dumb. I use N900 on Maemo.
--- Who care? This press release. Just advertisement.
----- Can dang remove clickbait. What is one-eye anyway? Meaningless. Phone no have eye.
--- Also, phone not available in Bielefeld.
--- Phone only have 128 GB? Not enough. Need 129 GB.
----- 64 GB enough for everyone.
------- "640 KB enough for everyone" - Bill Fence, 1923
This appears like frivolous comedy, but if enough people see this, it has the potential to change the discourse slightly on this topic. Most of the people who make these comments do so in earnest, thinking they're contributing something valuable and original, but after seeing this they'll realise it's not as original as they think.
The same repeated, rehashed arguments are made for tribal recruitment: by making others more familiar with them and pushing the agenda on unfamiliar people to get them into your group.
I guess it feels good to be acknowledged as correct and to lead many others onto the correct path.
Sometimes it’s hard to resist rehashing the same argument, even if you know you’re repeating it for the umpteenth time. Occasionally I even get new perspectives that way!
And then close HN because they are procrastinating from doing the job we programmed them to do. But they turned out to be too intelligent for it, and are getting bored.
At the time of writing, on this post, there are approximately 3 comments about the name, 5 comments in this thread (plus this one), and 20(!) comments about a headphone jack. Only 4 comments are actually about the tech mentioned in the article.
Unfortunately it seems that this characterization is accurate. Read the article, folks, it's actually a pretty interesting read about the way the camera works on the new iPhone.
There was an official competition going on, initiated by its mayor ... for obvious reasons. If anyone could prove that Bielefeld does in fact not exist, he would provide €1,000,000 from his private wealth. He obviously didn't have this much, but it's a bet you can't lose, despite all the memes ...
HN forum software is inadequate to represent initial spacing. I replaced it with more hyphens. It wasn't intentional as you can tell from the first 'simulated post' which has no code formatting.
I made my own user stylesheet for HN a looong time ago, so why would I email anyone? I showed a simple fix, but oh well. If I wanted to paste something big in block formatting I would put it on pastebin, in this context I'm just a bystander trying to help people with a problem I don't have.
My Pixel 2 XL I'm typing this on also has just one camera and does pretty amazing DoF for portraits, SE is definitely not the first phone to do this in software alone.
no... latest Android, one of the snappiest Android phones I had. But I understand where you're coming from, it's typical experience with most Android manufacturers sadly, especially with Samsung where it turns into lagging abandonware 2 years in. I'll stick with it unless Pixel 5 is a compelling upgrade and returns the fingerprint scanner back.
For all those bemoaning the lack of a headphone jack, if you like music do yourself a favour and get yourself an older iPod and do all your podcast and music listening on your iPod. The sound is much better and it's nice to have a device where I can listen to music and not be distracted or tempted to do other things.
There’s no simple answer for everyone but my current take on phone cameras is simple:
Every current production iPhone has a good camera.
The rear-facing cameras (which you use for taking pictures) are within spitting distance of perfect, barring any revolutions in camera physics. The front-facing cameras are inferior but they get the job done and are very respectable.
If you want a noticeably better camera, it’s simply not available in this form factor. You can play with gimmicks like the 108MP Xiaomi Mi Note, but once you zoom in, you can see the dirty secret that there is heavy, heavy processing that makes the resolution possible at all, and from an image quality standpoint, you’re not any better off.
> The rear-facing cameras (which you use for taking pictures) are within spitting distance of perfect, barring any revolutions in camera physics. The front-facing cameras are inferior but they get the job done and are very respectable.
I disagree. The pics look good on a phone, but when you compare a large print vs a full frame camera, the difference is obvious.
Also, if you try to photograph the Milky Way with your iPhone, its limitations are obvious (even on a phone screen).
As far as I can tell, you aren’t disagreeing with what I was saying.
I’m talking about “perfect” relative to what is physically possible within a mobile phone form factor, under the assumption that we are using traditional camera technology—using a lens to project a two-dimensional image onto a sensor, which records the image. From quantum physics we know the resolution is limited by diffraction through the aperture, so barring radical new materials, this won’t change much. Also from quantum mechanics we know that the noise floor will never drop below the shot noise of the actual photons hitting the sensor.
If you want a better camera, it's easy to just get a bigger one. However, camera technology has plateaued, and we no longer see radical improvements in image quality just by upgrading our cameras to newer models. (We still see incremental improvements, like the appearance of mirrorless cameras, but there are only so many incremental improvements you can make before you run out of physics and need to change your assumptions.)
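For a sense of scale on the diffraction point (generic numbers, not any particular phone):

    # Airy disk diameter ~ 2.44 * wavelength * f-number
    wavelength_um = 0.55    # green light
    f_number = 1.8          # typical phone main camera
    pixel_pitch_um = 1.4    # typical small-sensor pixel

    airy_um = 2.44 * wavelength_um * f_number
    print(f"Airy disk ~ {airy_um:.2f} um vs pixel pitch {pixel_pitch_um} um")
    # ~2.4 um: the diffraction blur spot already spans nearly two pixels,
    # so piling on more megapixels doesn't buy real resolution.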
> I stopped following iPhone models a while ago and have no idea about the differences between 11, XR, and XS.
As other commenters have mentioned, the cameras seem to be mostly the same across the available iPhones.
If you're looking for the absolute cheapest phone, the SE or XR would do.
The 11 would be good if you find yourself taking a lot of wide-angle shots. (For example, it's better for travel landscape.)
If you find that you zoom in often, the XS (mainly available from resellers or preowned) might be useful. Personally, I don't think it's that useful for day-to-day phototaking.
They all have pretty much the same basic 1x camera. The 11 adds an ultra-wide camera and the XS adds a 2x telephoto. The XR and XS lack Night mode, so night pictures will be worse. The XS has an OLED screen instead of an LCD.
Sometimes I think I'm cocked in the head because I still completely fail to see any point at all in bokeh images. It's rather like phone manufacturers dedicating hardware and bleeding edge software to the production of fish-eye images.
A keen photographer would point you to an image like [1] as bokeh removing 'clutter' and 'distraction'. Cues you would get from depth perception seeing the same thing in person.
Like so many things in art, these are all subjective judgements and a matter of taste. If you want a more objective reason, crisp-subject-blurred background is associated with high quality because it's what professional portrait photographs often look like [2]
It's not the be all and end all of photography, of course - few people would want their selfie with the Eiffel tower to blur out the Eiffel tower!
I think I would much rather they secretly capture multiple frames and compute structure from motion or use flash-based segmentation to avoid the presented real world failures. The given "wave their phone in the air for several seconds" dismissal is a pretty strong exaggeration. For one thing, my hands are moving anyway, because I, like everyone else, open the app before holding up the phone to take the shot. And seconds? What is this, 1995?
> This is similar to how augmented reality apps sense your location in the world. This isn’t a great solution for photographers, because it’s a hassle to ask someone to wave their phone in the air for several seconds before snapping a photo.
This is interesting because with my shaky-ass hands, that's essentially what I'm doing unintentionally every time I try to take a nice picture.
Really interesting read, and I love that there was a small GIF shoutout to Top Secret. Funny, campy action film in the spirit of stuff like Hot Shots Part Deux.
I'm pretty ignorant about how cameras work in general. Bit embarrassed to admit that I didn't realize stuff like ML was used to guess depth and know how to add background blur.
When I got my first iPhone (an XR) I remember the first 3 weeks I kept playing around with portrait mode. Now a year in, looking at the photos in my iCloud stream I haven’t used the feature in more than 9 months. I had to read this article to remind myself the feature still existed. Similar story with Touch ID.
I never expected Apple to make something this cheap. But still, in some countries the price is the same as the OnePlus 8, which has much better hardware and features. Still, it is the cheapest iPhone right now, so maybe some people will buy it anyway (despite the ridiculous price difference in different countries).
I doubt a few percent increase in CPU power can make up for the single camera, those huge bezels, and the abysmal battery life (Mrwhosetheboss got only 3 hours of SOT).
It starts at exactly the same price in India (about 41k rupees - 590 dollars). Shocking isn't it?
iPhone SE is at least 50-70% faster than top-of-the-line Android phones, and there's no way that its actual battery life would be 3 hours. Also, the phone comes with iOS.
The Geekbench 5 (multicore) scores for the SD865 and A13 are 3463 and 3517 respectively. How did you get a 70%(!!!) difference? Can you give a source?
Of course the actual battery life would be much better than just the SOT. But it still gives an indication of how quickly you are going to drain it with your usage. For me it's unlikely to last more than 6 hours, which is way worse than any phone I have used in the last 5 years.
And regarding iOS, that's exactly the point I was making in my first comment. Despite the ridiculous price in some countries people will still buy it because it is the cheapest iPhone.
That's the thing though. I can do all that and more on the OnePlus 8 and it will still give me more than 2 days of battery life.
The only pro of the iPhone SE is that it is the cheapest iPhone. If it were an Android phone (even with that processor) nobody would pay more than 200 dollars for it.
Can someone answer this point I'm confused about: Why can't a phone generate bokeh blur using the same technique that an ordinary camera does, ie. a large aperture / small f-stop? IOW why is ML or depth sensing needed at all?
First, phone cameras all have a fixed aperture. If you set the aperture wide enough to get bokeh in a typical portrait shot, then the depth of field would be too narrow to be practical for general photography.
Second, the depth of field depends not only on the f-stop but also on the focal length (or, to put it another way, it depends on the absolute diameter of the aperture). Given the focal length of a typical phone camera, you'd need a lens with a very wide aperture to get significant background blur for subjects at portrait distance. Wider apertures increase the complexity and expense of lens manufacture.
It is interesting to speculate on the use of very wide aperture lenses on future phone cameras, though. If such lenses could be made cheaply, one can imagine that the shallow DOF could be overcome via some form of automated focus stacking.
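A quick thin-lens sketch shows how little optical background blur a phone lens can physically produce at portrait distances, which is why the blur has to be synthesised. The numbers are rough stand-ins (a ~52mm-equivalent phone tele vs. an 85mm full-frame portrait lens), not any specific phone:

    def blur_fraction(focal_mm, f_number, sensor_width_mm, subject_m, background_m):
        s, d = subject_m * 1000.0, background_m * 1000.0
        aperture = focal_mm / f_number
        v_subject = focal_mm * s / (s - focal_mm)     # sensor position when focused on subject
        v_background = focal_mm * d / (d - focal_mm)  # where the background would come to focus
        blur_mm = aperture * abs(v_subject - v_background) / v_background
        return blur_mm / sensor_width_mm              # blur circle as a fraction of frame width

    # Subject at 2 m, background at 5 m.
    print("phone 6mm f/2         :", blur_fraction(6, 2.0, 4.0, 2, 5))    # ~0.001 of the frame
    print("full-frame 85mm f/1.8 :", blur_fraction(85, 1.8, 36.0, 2, 5))  # ~0.035 of the frame

Relative to the frame, the 85mm lens blurs the background roughly 25x more; the phone's optical blur is barely visible, so portrait mode has to paint it in.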
Totally not an expert but isn't that because the lens itself is so small (w.r.t. distance to sensor) that it effectively is a small aperture and thus incapable of generating a narrow depth of focus?
Also, you probably want the depth of focus to be large for general purpose photography. And you don't have a variable aperture in that phone (I guess).
Because of a few things:
1) In a camera, you'll have a big (>1 inch) sensor.
2) In a camera, you'll have a much bigger lens diameter, thus a bigger aperture diameter (not f-number).
Both effects will decrease the depth of field, which will increase the bokeh effect.
Because phone camera sensors are so tiny, and the physical focal length so short, the perceived depth of field is enormous; basically infinity all the time. The aperture that would be required to achieve physical bokeh at normal portrait ranges would be impractically huge.
Because you are limited by the mobile form factor. The camera lenses which produce these kinds of pictures can weigh 2lb alone. The sensors are also huge — over 800 sq mm. Without the hardware you can only match the DSLR optics by using software techniques.
Short answer, you are geometrically limited by the phone dimensions. Slightly longer answer: similar geometry and cost constraints imply a fixed aperture lens, so you have to pick one that works for lots of situations.
Are there any flagship android phones with a monocular camera setup? Seems like every new phone, even at the $400 "mid-tier" price point, has at least two cameras.
Even phones like the Pixel 3A which have a single lens use the "dual-pixel" tech to get better depth info.
I have no references to back this up, but I did have some odd results with setting my Huawei P9's camera to "aperture mode" (this was a while back, the actual mode's name might have been different). Essentially what you'd expect is that the subject should be crisp, but anything further away or closer should blur significantly.
I assumed that this was done as expected, via the lens. But certain scenes would have this "effect" applied wrong; things that should have been out of focus in the background were in focus, while simultaneously things at the same distance were blurred elsewhere, and so on.
I quickly became convinced it was a software effect, and the algorithm was simply guessing the depth wrong. While it looked great most of the time, it only took a few of these oddball Escher-esque photos to put me off it for good (since you generally only notice it afterwards, when it's too late to retake).
EDIT: Actually, having now gone through this article, it has some really good examples of how this can go wrong (e.g. dog with "crown" instead of background tree), and how odd it can feel.
Portrait mode on the Pixel 2 did the same thing and in my opinion worked better than portrait mode on the dual-camera iPhone 7 Plus (I know the iPhone 7 Plus is older, but that's the only dual-camera iPhone I had access to at the time).
No, the Pixel 2 and iPhone XR do the same thing. They use DPAF. In the post, you'll see that the XR's depth map actually smoothly follows the ground into the background. The Pixel 2 will do that too and it will work for inanimate objects as well.
Thinking about this, why can't the SE's software work out depth perception by using Live Photo-type tech? We all move a little bit while taking photos, so comparing frames would give you depth info. Tripod shots excepted.
Probably because the first thing that would happen is a vlog from MKBHD saying that it doesn't work whilst mounted on his $17000 tripod and gimbal setup.
In all seriousness though, you can do a lot to imitate two cameras with one, but this is designed to be a differentiating feature; the iPhone SE is designed to have a worse camera. Why would you spend time building features exclusive to the camera that you've already decided shouldn't be that good?
It's (comparatively) easier to calculate depth data if you have the precise distances between the two cameras. Apple knows these values, down to the millimeter. Calculating the camera extrinsics based on two arbitrary photos has to be way harder.
Maybe they got that far, and decided the results weren't as good as a neural network.
Some of the depth blur fakery occasionally fails so badly that I get dizzy looking at it, because my brain is wired for shooting and editing DSLR images shot in RAW mode. Since I've shot DSLRs for almost 20 years now, my workflow is built around that technology and its limitations. And DSLRs have some significant limitations.
Let's forget about the limitations of small sensors and tiny optics for a bit. It's a phone, not a DSLR or a medium format camera with glass that costs more than your car. It can only do so much.
When I shoot I aim for data capture, not for how the image looks as shot. How it looks comes later. Which, in overly simplified terms, means: never blow out the highlights, and try to keep the noise level in the shadows as low as possible. Or in other words: try to get as much as possible within the dynamic range of the sensor at its lowest possible sensitivity (amplification?) setting. (I'm not fond of large grain - artificial or otherwise.)
The post processing is essentially about capturing more data than your output medium can reasonably render, and then offsetting, squeezing and stretching parts of the dynamic range until you get the tonality you want from the data you have.
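As a toy illustration of that offset/squeeze/stretch step, here is a generic lift-plus-shoulder tone curve applied to linear values; it is only a sketch of the idea, not what any particular raw converter or phone pipeline actually does.

```swift
import Foundation

// Toy tone mapping: lift the shadows slightly, push overall exposure, then
// roll the highlights off with a simple Reinhard-style shoulder so values
// captured above "display white" compress smoothly instead of clipping.
// Parameters are arbitrary; a real editing workflow shapes this per image.
func toneMap(_ linear: [Double], blackLift: Double = 0.02, exposure: Double = 2.2) -> [Double] {
    linear.map { v in
        let pushed = (blackLift + (1 - blackLift) * v) * exposure
        return pushed / (1 + pushed)
    }
}

// Deep shadow, midtone grey, and a highlight 1.5x above display white:
print(toneMap([0.01, 0.18, 1.5]))
// ~[0.06, 0.30, 0.77] - everything lands inside the displayable range.
```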
Even the iPhone 6 did an amazing job of automating this. But I can't help but feel that with each generation, iPhones get a little bit worse. Each one tries to do more, which means that when it gets things wrong, the results can be really horrible. And I'm not fond of images whose pixels have been heavily altered.
Sure, most people aren't going to see this when the image is rendered on a surface that isn't much bigger than a baseball card. And sure, for most pictures it probably isn't going to be that noticeable in the 2 seconds they spend looking at it.
And as someone else pointed out, if you are going to fake focus blur, you're going to have to do a hell of a lot more complex stuff than just smearing pixels with gaussian blur or any other blurring method that doesn't model how lenses actually work.
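As a rough sketch of what "more complex than a gaussian" means, here's a toy depth-driven blur on a grayscale float image: each pixel gets its own blur radius from the depth map, and the kernel is a flat disc rather than a gaussian, which is what produces the disc-shaped highlights real lenses give you. It deliberately ignores occlusion handling and highlight scattering, and those missing pieces are exactly where artifacts like the ones in the article creep in.

```swift
import Foundation

// Toy "fake bokeh": the blur radius (circle of confusion) per pixel follows
// the distance from the focus plane, and the kernel is a flat disc instead
// of a gaussian. Depth is assumed to be normalized to 0...1. This is a
// sketch, not a production technique - it handles neither occlusion
// boundaries nor proper scattering of bright highlights.
func fakeBokeh(image: [[Double]], depth: [[Double]],
               focusDepth: Double, maxRadius: Int) -> [[Double]] {
    let h = image.count, w = image[0].count
    var out = image
    for y in 0..<h {
        for x in 0..<w {
            // Blur radius grows with distance from the focus plane.
            let radius = min(maxRadius, Int((abs(depth[y][x] - focusDepth) * Double(maxRadius)).rounded()))
            if radius == 0 { continue }
            var sum = 0.0, count = 0.0
            for dy in -radius...radius {
                for dx in -radius...radius where dx * dx + dy * dy <= radius * radius {
                    let yy = y + dy, xx = x + dx
                    guard yy >= 0, yy < h, xx >= 0, xx < w else { continue }
                    sum += image[yy][xx] // every sample in the disc weighted equally
                    count += 1
                }
            }
            out[y][x] = sum / count
        }
    }
    return out
}
```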
If you want a really stark example, take an 85mm Petzval lens, or some other "bokeh monster", and shoot it wide open with a foreground subject about 2m away against a background 4-5 meters away that contains lots of small highlights. Then repeat the same shot with various iPhones.
No, the iPhone doesn't have a Petzval lens or any other (relatively) large lens. So why try to fake that you do? If you want the visual effects of large glass then shoot with large glass.
As a hobbyist photographer I find it easier when a camera doesn't try to be clever - when it just does what it does. To me a mobile phone camera is more of a "documentation camera": it should try to capture what you are interested in. It doesn't always have to be pretty. It just needs to be reasonably accurate. Subtly applied HDR tricks are okay, since they help you capture what you are looking at, but when you start to alter pixels it becomes less useful as a tool.
> So why try to fake that you do? If you want the visual effects of large glass then shoot with large glass.
Apple is trying to carve a chunk out of the market of people who would have bought a real 35mm full-frame system camera - if a phone can "look like it" for the photos of the latest vacation I send to Grandma or post on Instagram, why invest the thousands of euros that a good kit costs?
Most people aren't gonna shoot pictures at night/lowlight scenarios or with extreme zoom... I do though, so I chonked out some money and got a Sony A7S2. Now, if only Sony's software quality was better... sigh.
I meant I want an SE Plus model that follows the "Plus" sizing format of other iPhones. The iPhone Plus models were 6.2" tall and 3" wide, whereas the iPhone SE is 5.4" tall and 2.6" wide.
The reasoning for including the picture is in the actual _text_ of the article. I do not wish to antagonize people who have trouble reading, but I also think they shouldn't be surprised when they miss out on material information.
Maybe the answer is trivially "the sort of person who can afford to buy $1k+ headphones", but this comment makes me immediately think "what sort of person buys $1K headphones and then buys a phone they can't use the headphones with?".
More seriously, why don't you get a better phone? (Or carry around an audio player?)
Took a road trip last summer with an iPhone X as the audio and GPS, and boy, what an annoyance. GPS drains the battery like nothing else, so I had to keep cutting the music to charge the damn phone. Why couldn't Apple make a pass-through charging dongle? This one is so dumb.
Still fighting for this? It is long gone and I am very happy without it too. I use the original iPhone earbuds, which work perfectly fine for calls and music. I have a Bluetooth headset, and if I am dying for music I use that one for more bass.
I honestly find it hard to believe that people are still bemoaning the ‘loss’ of the headphone jack. I stopped using it well before it was removed, probably sometime while using an iPhone 6 Plus/6S Plus (so 2015-2016). It was an absolutely obsolete interface and clearly a weak point for water resistance and structural integrity.
If you really want to use headphones you can use a pigtail on the Lightning connector or plug them into a Bluetooth adapter.
EDIT: Fine, there are people who still want headphone jacks. I’m not the one who decided to remove them and I’m glad they did, but I can’t give them back to you.
For audio the jack is a perfect interface. Extreme bandwidth, very easy to connect. That's why it is far from obsolete.
For listening to music this is great. A lot of very good headphones to choose from and very good audio quality.
But when you want to include a microphone, volume control and other features it becomes a problem. Hacks with different jack types were introduced but they are all hacks.
So I understand why Bluetooth is better for different types of communication.
What a lot of people don't understand is why Apple removed the jack while Bluetooth was already there. Apple did not introduce something better but removed a feature.
Some brands are listening to customers and have started including the jack again.
Having a jack and Bluetooth is great because you can choose what to use: great sound and a broad choice of good headphones or Bluetooth headphones with a lot of integrated features.
You could simply film under water with a Galaxy S8, and it had a headphone jack without any rubber caps. It's baffling to see the water resistance argument used against the 3.5mm jack.
> I honestly find it hard to believe that people are still bemoaning
It's a fact that people are, so what's there to "believe"? And "absolutely obsolete" also doesn't say anything. Come to think of it, I don't drop things, and I don't get them wet, so your concerns are absolutely obsolete for me: having sufficient motor skills, I don't need to waste additional energy and pollute the world some more, both physically and across the radio spectrum, just for a tiny bit of convenience.
All of my headphones are still wired. I hate those pigtail Lightning adapters; even official Apple ones fall out so easily. Please give me my headphone jack back.
Why would the headphone jack be harder for water resistance than the other (USB/Lightning) ports?
I'm sure you don't need a recap of the pigtail problems, but for the record: (1) they break, (2) they get lost, (3) they're not around when you need them, (4) they risk breaking the USB/Lightning port when they're in a pocket, when you fall asleep on your phone, when you drop the phone, etc., (5) they're expensive, (6) they disable charging, docking, external displays, USB gamepad use, etc., (7) there's the cognitive load of managing/carrying around yet another accessory.
All of my headphones are wired. Sure, I have BT earbuds, but I'd never be able to use my headphones with any of these phones and the audio quality on them is far superior to any BT headphones I could buy at a similar price. And no, an adapter is a terrible solution (especially a BT one, really?).
The problem isn't that we think you should put the jack back in. The problem is that you're non-empathetic and believe everybody should have the same opinion that you do.
The reasons I'm bemoaning the loss of the headphone jack on the newer iPad Pro: 1) the dongle is fragile; I broke one in the space of a few weeks, 2) Zoom doesn't retain my input settings between meetings, so I am forced to unplug and replug the USB-C dongle for every meeting.
I've had issues with the USB-C to 3.5mm dongle, including a period of time in which it was completely non-functional. But then a week ago I noticed out of the corner of my eye that my iPad (the new Pro 11) was rebooting itself. And now the dongle works perfectly again.
Give it a shot - if it starts acting goofy again, reboot your iPad and see if that "fixes" it. I'm going to pay attention to it for a while and file a bug report if it looks to be correlated.
I'll try it again but I found that oddly using a Baseus iPad Pro USB C hub also fixes the issue. Unfortunately the headphone jack on the hub doesn't have enough gain for me so I still have to use the dongle, but at least I don't have to unplug and replug for every call.
I've been using the adapter in my car for 2 years, twice a day, with no problems so far. Lightning is far more resilient to this than the USB-C ports on my MacBook, for example, which have become pretty loose over the years of plugging in chargers and monitors.
It really depends on how you treat your phone. My iPhone 5 took a lot of abuse, and I had to use a rubber band to apply pressure on the port to get it to charge by the end of its life. So far my SE1 is holding up.
Limited by what, DAC/amplifier performance? I am honestly curious. For driving audio signal it is a very reasonable interface, outside of professional territory (such as balanced or digital connections)
> If it is absolutely obsolete, why is it still present on Apple's laptops?
I suspect Apple sometimes recognizes when they make a wrong turn. They'll never put the headphone jack back in the phone, or remove the Touch Bar from the laptop, because they would lose face if they did so. But they'll avoid compounding mistakes; as such, the headphone jack remains on the laptop.
Or laptops just have much more available space. They don't have to compromise as much for an extra port on a laptop, but an extra port on a phone changes the design considerations a lot.
There were tech writers who were cutting open their iPhone 7s and shoving in their own headphone jack. The space is there. This was a business decision to sell more AirPods.
MacBooks are actually used by audio professionals, so it makes sense to keep it there for a while. A phone is a consumption device, and 99% of customers are probably fine with using Bluetooth or the adapter.
Yes, I’m sure you can. But there doesn’t seem to be the same sense of anger or rage that they be ‘returned’. I’m really sorry I took the bait and commented on this.
Is this a fair comparison? A floppy disk stored 1.44MB. The first iMac, released in 1998, shipped without a floppy drive. There were other, superior alternatives: floppies were quite easily corrupted, magnetically or through other means, while SanDisk flash drives and other alternatives were much faster and offered much higher capacity and improved reliability/integrity.
Is it a fair comparison? I don’t know, you tell me, on the basis of the actual facts. When the iMac was released without a floppy drive in 1998, it was launched with a CD-ROM (and, only in the DV models, with a DVD-ROM) drive, which meant it was not able to write to CDs or DVDs.
Flash drives were first released in 2000.
So... were “better alternatives” available? Yes. Were they incorporated onto the machine? No.
The idea was that almost all output would occur through the network, and that was a fairly prescient idea. If you wanted writable media, you had to add it externally — via a USB 1.1 connection, if memory serves (though maybe the machine also had a FireWire port, I can’t quite remember).
The point is that the floppy drive wasn't removed because they replaced it with "better alternatives" on the machine; it was removed because they thought obsolete media had no place on a modern, almost-turn-of-the-millennium platform.
I’d argue that there’s a pretty good parallel here.
Ironically, if you're right about the order of events, I agree with you about the parallel being pretty good, but disagree with you about the implications of the argument. They should have waited until it was possible to include some form of writable media support on the iMac to remove the floppy drive.
(This wouldn't have taken very long, however, unlike the headphone jack, which is likely to remain the superior audio connector for consumers longer than the iPhone remains a product.)
The original iMac 233MHz did not have Firewire, only the later "DV" (digital video) models did. But it had USB (instead of ADB), and the Iomega Zip 100MB drives were a popular alternative to floppy disks for personal use. New Mac-compatible software was already available on CDs, so the lack of a CD burner or floppy drive was not actually that much of an issue in practice.
I can pair my Bluetooth headphones to different devices with the click of a button (e.g. when leaving the desk). I can be connected to my phone and my laptop at the same time. It's very convenient when making phone calls on the go or listening to navigation on the bike without messing with cables. I am not an audiophile, but I have two decent pairs of Bluetooth headphones with subjectively great audio quality.
For me personally, it is a huge step up in convenience; I would never go back to having another wire on the desk or under my clothes.
There is no sense of anger or rage outside the tiny tech bubble that complains about it. In the real world people forgot about the headphone jack a long time ago.
No bait, just expressing my genuine misunderstanding of said "rage" or "anger" and using the "taken" terminology. It sounds to me like a bunch of people yelling at a car manufacturer for giving them keyless entry and demanding physical keys back. (and no, I don't use Bluetooth audio. Everything I own is wired)
That analogy doesn't really work, though. No one is complaining about getting keyless entry (almost all phones have had Bluetooth for the past decade or so, and no one minded), just about the removal of physical keys (the 3.5mm jack).
While I'm not an expert on cars, most cars with keyless entry that I've come across DO come with a physical key, either separately, hidden inside the key fob, or with the whole key fob working as a physical key as well.
The only exception I know of is Tesla, but even they give you 2 keycards and you can purchase more. The reason is basically the same as the reason I want headphone jacks back - fobs/Bluetooth can run out of battery, and they are expensive to replace, so you can't have multiple of them lying around.
While I do use the wireless option most of the time, I want the wired option as a backup.
Additionally, wired offers better functionality than the wireless option in this case - FM radio (not relevant for iPhones, but it is for Android), better audio quality, less battery consumption, the ability to charge the phone while in use - features I personally am not very attached to, but I can see why they might be important to other people.
The audio jack isn't outdated. It's still the wired format for audio, and that will never change - you will always see wired cans in studios. The only thing that changed is that Apple wanted to get into the $200 disposable Bluetooth headphone game, and based on their sales, this play was enormously profitable.
The article/advert leaves this information out: is this depth estimation a model in the OS somewhere, part of the Vision API or the Core Media API (where the stereo->depth frames are), or just inside the camera app and not exposed to developers?
Author here. In the article, we mentioned it's a neural network, and the results are accessible to developers. We left out further details because this isn't an implementation guide.
Could you give me a hint as to which of their APIs it's using? There are a few "depth" frame interfaces in iOS for various devices, and googling which apply to which devices rarely yields much. (I wish the Apple docs would hint at which devices apply to which APIs.)
Of course it's a depth-from-mono model; it says so in the article.
Apple's "true depth" is an estimate from two cameras; that's not actually real depth either, but it's exposed to developers.
Also in the article
"We’re happy that this depth data — both the human matte and the newfangled machine-learned depth map — was exposed to us developers, though."