1. The fake bokeh looks surprisingly good. I don't think any of my non-technical friends would be able to spot it on their own.
2. These pictures are absolute best-cases for the iPhone (or any) camera. For headshots with well-chosen natural light, as in these pics, a picture taken with an iPhone 4 would look almost as good. "Studio lighting" and other tricks won't fix bad light. Neither will a $6,500 Nikon D5 with a $1,000+ lens.
3. Nikon, Canon et al need to try harder. Cellphones have already replaced several categories of "real" cameras, and they keep improving every year.
Just look at these cropped images over a span of 4 years:
They got disrupted and, like most incumbents, failed to capitalize on the disruption "because it could eat their margins" or whatever their reason for not getting into the smartphone market was.
I call BS, if by "amateur photographers" you're talking about enthusiasts who care about composing and exposing a great shot, not just taking a nice picture of the kids in front of the Christmas tree. It's like someone in 2012 saying "at the rate smartphone input apps are progressing, in 5 years no one will buy physical keyboards for their computers."
The reason someone carries a DSLR today is because of the optics, the speed, and the control mechanisms. Having shutter speed, aperture, white balance etc. at your fingertips when shooting. Having the ability to switch from a 105mm portrait lens to a 20mm fisheye. Having that rapid autofocus and response time that lets you capture great shots. Having 14-bit RAW files that you can post-process to save that once-in-a-lifetime reflex shot that was underexposed. Having the ability to mount a flash off-camera (or even three strobes). Those are things you can never get on an iPhone or Samsung.
Nikon, Canon etc. have already been "bled dry" of customers going to smartphones instead of DSLRs, and at this point their business models look fairly stable. The share price of both companies has also been fairly stable for the past 3-4 years; both are up 30% over the past year, although that (and a lot of their volatility) is tied to the JPY:USD exchange rate.
I can see that in 5 years smartphone speed, coupled with advanced phase-detection AF, will make smartphones as fast as today's pro DSLR bodies (the likes of the D5, the Canon 1D line, etc.).
The need to quickly adjust settings (f/stop, shutter, etc.) can be greatly diminished by having even more advanced software - like a bokeh/portrait mode, where the lens is kept wide open and the phone manages all other settings, including fake bokeh via a depth map.
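Roughly, a toy version of that depth-map bokeh might look like this (a minimal sketch, not any vendor's actual pipeline; it assumes a per-pixel depth map already exists, and fake_bokeh is a made-up name):

    import numpy as np
    import cv2  # OpenCV, for the Gaussian blurs

    def fake_bokeh(image, depth, focus_depth, max_kernel=21):
        # image: HxWx3 uint8; depth: HxW floats in [0, 1];
        # focus_depth: the depth value to keep sharp.
        # Precompute progressively blurrier copies (odd kernel sizes only).
        kernels = range(1, max_kernel + 1, 2)
        blurred = [cv2.GaussianBlur(image, (k, k), 0) for k in kernels]
        # Map each pixel's distance from the focal plane to a blur level.
        distance = np.abs(depth - focus_depth)
        idx = np.clip((distance * len(blurred)).astype(int),
                      0, len(blurred) - 1)
        # Assemble the output by picking, per pixel, from the right level.
        out = np.empty_like(image)
        for i, b in enumerate(blurred):
            mask = idx == i
            out[mask] = b[mask]
        return out

Real portrait modes add edge-aware matting, highlight rendering and so on, but the core idea really is just "blur each pixel by its distance from the focal plane".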
Different lens focal lengths are a big differentiator. However, if the sensor has sufficiently high resolution, the software can crop the sensor output to an extent that largely simulates telephoto. Granted, the perspective compression won't be achieved, but it's largely there...
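Back-of-the-envelope, with hypothetical numbers (a 40 MP sensor behind a 28mm-equivalent lens, both made up for illustration):

    # Cropping scales the frame linearly, so pixel count falls by the square.
    sensor_mp = 40.0
    base_focal_mm = 28.0
    for target_mm in (50.0, 85.0, 105.0):
        crop = target_mm / base_focal_mm
        print(f"{target_mm:.0f}mm-equiv crop leaves "
              f"{sensor_mp / crop**2:.1f} of {sensor_mp:.0f} MP")

At 105mm-equivalent you're down to under 3 MP, so "sufficiently high resolution" is doing a lot of work there - and, as noted, a long lens's perspective compression depends on subject distance, so cropping can't recreate it.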
5 years is a long time.
The software will in this case be inventing an image, not capturing one. Don't get me wrong - I'm delighted with the improvement in quality that pocket cameras, embodied in smartphones, have achieved in a mere decade. But physics is still physics, and a 6x5mm sensor can only capture so much light.
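For scale, taking the 6x5mm figure here against a standard 36x24mm full-frame sensor:

    phone_area = 6 * 5         # mm^2, the 6x5mm figure from the comment
    full_frame_area = 36 * 24  # mm^2, a standard full-frame sensor
    print(full_frame_area / phone_area)  # -> 28.8x the light-gathering area

That's roughly 29x the light-gathering area at the same exposure settings; software can't recover photons that were never captured.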
I love using my old film cameras, and as an amateur, I actually only use an iPhone and film cameras.
I don't really see the point of using a bulky, short-lived DSLR to produce pictures that are neither quick nor great.
As a professional, I guess the issues are different though.
This can be filtered to some extent - indeed, modern phone cameras already do so quite heavily, to overcome the limitations of their already-small pixel size and already-high sensitivity. But doing so costs the same detail you'd need to invent a "simulated telephoto" image, giving your fake-telephoto process even less to work with than otherwise. I'm guessing you have a neural net in mind here, and while I'm not about to argue that a sufficiently well-trained net won't produce some kind of result given an input of sufficient similarity to its training set, I see no reason to expect that result to bear any particularly photographic similarity to the original input.
I mean, don't get me wrong - what you seem to be suggesting isn't all that dissimilar from how we currently understand the human brain's own optical system to work. But I don't think it is especially likely that many such brains will happily accept another neural network's best-effort guesses as the output of a process that we've all learned to expect will give us representations as precise as is within the capabilities of the devices we use to make them.
Increasingly, there's no reason for that type of user to buy a DSLR. A good phone will handle 90% of their uses for a lot less (incremental) money and effort.
But I agree that there's still a huge gap between people really using their DSLRs as DSLRs and smartphones. And, as for cameraphone accessories, the fact that you can stick awkward add-ons onto a phone to make it a better camera doesn't mean that you should, or that it's something most people want to do.
LOL, I wasn't sure which usage of "optics" was intended here; I had to read the whole sentence.
But, if I'm paying them to do a job, I expect them to show up with gear that's more or less the professional standard for what they've been hired to do.
Everything else is processing and software. Processing on smartphones is progressing WAY more quickly than on DSLRs. It takes Canon a couple of years for each Digic processor generation; smartphones get a new image-processor generation every phone cycle.
And maybe not a fisheye, but 360 degree camera attachments already exist.
My gut says many of the other points either are wrong already or will be soon.
Why couldn't you have adjustable white balance, rapid autofocus, high response time, high bit depth, external flash, etc. on a smartphone camera? These seem like things which should be possible even with today's technology.
- if you need more parts, e.g. a flash, you go from having one thing in your pocket to carrying a bag anyway
- dedicated UI and buttons for adjustments while shooting without having to look away from your subject or having to change your grip
- much larger sensors mean more light to work with during shooting and post-processing
- dedicated glass that you can't quite yet replicate with light-field tech
The smartphone can do many of the things a dedicated camera can; it's just not as good on almost all fronts, and much worse in some aspects. You can, under more and more conditions, get images that rival DSLRs, but not under ALL conditions: if you can control time, light, and subject all at once, a dedicated camera can be matched. If you can't control even one of them, grab a dedicated camera.
The tactile UI is one major gripe against smartphone photography for pros and enthusiasts alike, which is why a phone to some extent needs to be smart/automatic, and while today's image sensors straight-up beat the pants off any preceding image sensors, physics still poses hard limits on noise and light capture.
Today's image sensors are very close to being able to count individual photons, and making the sensor larger means being able to capture more of them at a time. Tricks are being worked on to extend dynamic range and lower noise (like double-exposure HDR, sketched below), so image quality is still increasing, but larger image sensors profit from those developments just as much as the small phone ones do. The days of small image sensors being good enough to beat a human eye are still far off.
TL;DR: dedicated tactile UI, physical interfaces, and physics can't quite be beaten by all the high tech we can pack into a smartphone package.
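For the curious, a toy version of the double-exposure HDR trick mentioned above might look like this (a minimal sketch assuming two already-aligned frames and a known exposure ratio; real pipelines align whole bursts and weight per pixel far more carefully):

    import numpy as np

    def merge_exposures(short_exp, long_exp, long_gain):
        # Toy two-frame HDR merge (all names made up). Inputs are aligned
        # HxW arrays in [0, 1]; long_gain is the exposure ratio, e.g. 4.0
        # if the long frame gathered 4x the light of the short one.
        short_scaled = short_exp * long_gain  # match brightness scales
        # Trust the long frame less as it approaches clipping (~0.95+).
        w = np.clip((0.95 - long_exp) / 0.95, 0.0, 1.0)
        # Shadows come from the cleaner long frame, highlights from the
        # short frame that didn't clip; the result is linear radiance.
        return w * long_exp + (1.0 - w) * short_scaled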
You probably eventually will.
Probably physical limitations. DSLRs have lots of space and dedicated hardware for making this fast.
>high bit depth
Sony does, and a number of smartphones use Sony sensors.
Note that AI is not being used to (simply) mimic a better sensor and lens; there's all sorts of stuff going on in the algorithms that a photographer would otherwise do in Photoshop or in the darkroom.
The problem is, there is a specific aesthetic being targeted, and this removes some of the artistry from photography.
I think there is a fundamental difference between a) the camera capturing multiple depths of field, focal points, etc., and then allowing the user to make the final decision in post production and b) the camera computationally simulating lens effects and lighting effects in the way that snapchat filters widen eyes and add animal ears.
Cameras are supposed to capture reality, not create a postcard-like view of whatever was in range or generate a flattering selfie.
These reviews should not be called camera reviews, they should be called "image algorithm reviews".
What's next, phones whose "microphones" make our voices sound more masculine or flirtatious?
I think that's exactly what many, many people want from their camera, and I don't see why it shouldn't be up to them.
Whether this is a good thing or not isn't a new question. My answer is, it's a normal and OK thing. However (as before) it can be taken to extremes or used for fraud. Obviously, these aren't good things.
It's new, too, so it will take some time for us to learn the limits of good taste and good judgement and for norms to develop.
The thing is, people can usually tell if it's extreme enough. After all, we still see each other in person sometimes. :)
This seems like an ironic fantasy to me, because I don't think there is such a thing as an objective reality. Human vision is not objective in the slightest. The objective reality we think we see is actually a fiction made up in real-time by the brain (https://www.ndtv.com/offbeat/what-colour-are-the-strawberrie...).
Some of the landscape photos and portraits shown are difficult shots that professionals can achieve after understanding lighting. Using a filter to simulate this is fine. I don't judge it. I am not a professional photographer and use filters some of the time.
The issue is calling it a review of a "camera" when it is really a review of filters. Debating over which company's fake bokeh is better is like debating whose animated kitten ear filter is more lifelike.
I understand that's not the same for everyone - but I enjoy the idea of my phone making the "pop music" of digital photography. Hell, I can't even zoom in without compromising quality. I just use it to document.
I worry about that too. It may also end up being a future where we are no longer fooled by fake effects and they start to feel inauthentic.
Chances are, when plastics were new, people remarked at how similar chrome-painted plastic was to actual metal. These days we can easily tell them apart.
Glamour Shots - the Analog algorithms of beauty.
If I want to shoot Sony Alpha SLR and edit the raws, I'll do that. But most of the time I just want to share what my family's doing with the rest of my family and this makes it really easy to do it well.
Where is this? Which OS? I believe the composition optimization is not something you can turn off.
What's the difference between optimizing the landscape composition and modifying a composition of a face to make it look more friendly, or modifying a picture of arms to make them look more buff?
I wish iOS would allow native DNG (RAW) captures with their camera app. They added HEIC but not DNG? It'd be so much faster to snap a pic and capture DNG with the native app, rather than firing up LR Mobile / VSCO, etc.
Neat new features, though, like that "slow sync". Why don't older models get "slow sync", though? It seems like something that's controlled by software.
Except that the point of a review is to go into details - in this case, to outline the ways in which it has improved, and by how much each of those has improved.
That, and Apple makes its money from selling this hardware, so having software that only runs on new devices helps sell them.
People with an iPhone 4, 5 or 6 have more reason to upgrade to an 8 rather than a 7 than just the camera.
Sort of like a GPU, you could emulate what it does in software, but it'd be prohibitively slow.
Of course it is.. it was announced today.. :|
That doesn't mean that, if the iPhone 8 had been designed to be 2x thicker, Apple couldn't have done much more with the camera quality.