Thanks to ML I can get footage at 2.5K in 12-bit RAW, or pseudo-4K in an anamorphic format. If you're a film student on a small budget, the amazing work from the people behind the project is a gift from heaven, as the quality is almost film-like (ready for regular LUTs).
Basically you can have RED or Blackmagic camera color and HDR kinda-quality with a small budget... and a lot of patience and free time.
Still tho... I would not use ML RAW for client production projects (like a wedding or a commercial), as most builds are in an "always-beta" state, with a lot of bugs and a slow editing workflow for a professional setting. So in those cases, I would recommend recording in plain H.264 with a flat profile, and using all the other ML features like zebras and the live histogram.
Kudos to the ML devs, who, against all logic, are doing a ton of work for the indie film community.
When Canon released the EOS M as its first mirrorless camera, it didn't go down well in many reviews because it was a slow performer. This was actually addressed in a later firmware update, but by that point the damage had already been done. After that, Canon basically dumped the camera on the market at very low prices.
The good thing for us is that this little camera can now be bought very cheaply second-hand (I got mine last year for around $100).
Once you load up ML, you get a fantastic, fun little camera! I'm not very experienced with shooting video, but for photography it's a wonderful experience. Focus peaking, Magic Zoom, interval photography, all for free.
Many kudos to the ML devs from this side as well!
I needed to do some video work last year, so I basically followed Casey Neistat's suggestion for a Canon camera, only to find out it wouldn't output live video over HDMI without the camera's overlay on top. Turning all the elements off still left me with a red dot in the corner. Apparently I got the wrong model, and the one $800 more expensive would do it. After a while I gave up, went back to the store, and got a cheap video camera that could do it. BUT again, I had to get the more expensive version, because the cheaper variant of the same model didn't allow this particular use case.
I was really surprised by how backwards this all was! Proprietary OSs, artificially limited functionality, anemic processors, interfaces out of the 1980s. It's crazy!
Has someone slapped Android on a decent camera yet?
And yes, the high-end full-frame DSLR costs are on par with high-end laptops, but IMO you get 90% of the benefit for a fraction of the cost with an entry- or mid-level crop sensor.
As for the UIs, I'll definitely acknowledge they could stand some updates, but the way you describe it sounds like an overstatement. What would you improve, exactly?
In terms of the UI, however, it's definitely very out of date.
When I upgraded my camera, one of the advertised features of the new one was that they added an extra digit to the internal intervalometer. Instead of being able to go between 0-999, the new one could go from 0-9999.
The fact that this couldn't be a simple software update still astounds me.
Basically slap a fucking Android phone on the back for browsing photos. Make it a separate CPU if necessary.
Embrace the 'casual camera' body style more. The viewfinder should not be a big chunky thing in the center, but on the side, and a pop-up if necessary. Get rid of grips and make bodies a uniform rectangle, and include a battery grip in the box to make them SLR-grippy. People are scared of SLR-style bodies, while they are not scared of casual camera bodies.
They will continue serving the standard pro market who want the big chunky SLR-style bodies; this is how they can better capture the 'high-quality life memories / Instagram influencer' market.
Canon, Nikon, and others have long decided they can’t and won’t compete with smartphone makers and are increasingly moving up-market to the prosumer and professional markets who want “big chunky SLR body styles”, and they use specialized software like Capture One and Photo Mechanic to view/edit/organize photos anyway.
These days when the shot is easy the phone is at least in the ballpark with the big guys. When the shot is hard the phone isn't even in the running.
It ran full Android; you had all your file management, Bluetooth, and wifi, and could play games if you felt like it. It could encrypt the filesystem, too, which I think is super important for journalists.
Basically nobody noticed or cared.
I have a Panasonic and the OS is pretty freaking great. Ten to twenty minutes of poking around and maybe a few hours of real-world usage, and I can customize my own buttons and quick menu for what works for me. I don't want some bloated POS Android/iOS apps on my camera.
As far as the hardware is concerned: yeah, there is a lot of artificial segmentation in the market. I think this is why the camera market is failing badly.
The menu UI is somewhat clunky, but does enable function discovery. There are no Nikon-specific swipe gestures to learn, for example.
Funnily enough, this is a feature of Magic Lantern, which is especially useful if you want to use a 3rd-party HDMI recorder to record longer than an SD card or the default OS would allow: https://wiki.magiclantern.fm/faq#how_do_i_record_for_more_th...
Samsung did this: https://www.dpreview.com/reviews/samsung-nx1 but gave up the market in 2015.
That was likely a good idea from a cost-benefit perspective, as standalone cameras continue to decline in unit volumes: https://www.dpreview.com/news/7719403699/cipa-s-november-num..., but it's a shame from a technological progress perspective. You are totally correct that a camera body really ought to be an Android device, but camera makers have not done that—which likely contributes to their fall in unit sales.
Having a Nikon and being able to shift shutter speed / aperture with a dial is very important.
You can kind of think of this as a remote control. Yes you can have a touchscreen as the only interface for a remote control, but you always have to look down at it to use it.
On my remote I can control volume and channel without looking at it.
A full-frame sensor can easily cost $600-$1000 without even considering anything else, yet these cameras are in the $2000 range and include highly sophisticated stabilization systems and professional video features (e.g. 4:2:0/4:2:2 output). Those are typical-to-low margins for a technology product.
Regarding your complaints: realize that higher-end cameras are meant to be professional tools, so they tend to favor conservative reliability and interface continuity over everything else. The camera app crashing, like it might on your Android phone, is unacceptable on a camera, so they run more reliable, specialized code. Speaking of Android, though: Sony cameras do run a modified version of Android, the previous few generations of cameras had somewhat of an app store, and people even got hacked APKs running on the camera to add missing functionality.
I do agree that camera manufacturers are purposely leaving out software features though. Stuff like an intervalometer and focus stacking should be standard on every camera. Plus, as you mention, some vendors are hobbling features to try to get users to buy more expensive cameras (like a 1.7x crop for 4K video on Canon, because they really want you to buy a C200 @ $7500).
To expand a bit more on interfaces, there is an interesting argument for and against it. It is no surprise that the camera industry is shrinking, and that the people most likely to buy new cameras are going to be those that already own an interchangeable lens camera. Thus manufacturers don't want to upset their customers by changing their interface too much. Companies like Hasselblad that produce more specialized luxury cameras have come up with more modern interfaces for their recent cameras. While I would personally love to see Canon/Nikon/Sony step up here and create a proper, modern touch interface for these cameras, they won't anytime soon. In everyday use, most of the interface with a camera is with physical controls, at most your common interactions with the software is changing options that are part of a quick menu.
You don't need the latest camera (which is also the one least supported by third-party tools) for most things. A $500 body with a $100 prime lens will outperform a $1500 camera with a kit lens.
It functions like an instrument that is designed to be used. Treated with reasonable care, from -30C to 100%-humidity-condensing conditions at the freezing point, it has consistently gotten the job done. It is compatible with a huge range of glass, and it all works. The interface stays out of the way -- when I'm making images, I'm never worried about the camera, just the photography.
When I'm working, the camera is always powered on. The battery lasts for weeks in that mode, yet when I depress the shutter halfway, it begins to lock focus in tens of milliseconds. With gloves on.
When these cameras are released, there are rarely, if ever, hardware revisions. They work that well on the day that they are shipped to the first consumers.
For me, the cost (and the value) is in the R&D and the ecosystem.
I suspect the problem here to be scale.
I suspect that those are above a $300 BOM given the display, sensor, and plethora of mechanical things (buttons, threads, etc.).
Mechanicals are very expensive compared to electronics. And cameras have LOTS of mechanical thingies.
If Canon sacrifices any of that for "extra" features, the pros leave for Nikon. And vice versa. These are still cameras first, everything else second.
The bigger/faster sensors are a big part of the price. The other big factor is all of the buttons and dials to quickly change settings without having to look away from the scene. The fast shutter, mirror, and optical viewfinder are also big contributors. Look at the assembly animation below of a Canon 10D to see all of the tiny parts packed into a small, ergonomic package, and you'll get a better idea of why they cost so much. Not to mention I've killed 3 phones with drops from no higher than 4 feet, while I've dropped my DSLRs many more times and from greater heights, and they show no signs of damage.
It's not about the software, it's about the build quality (once you get into the professional product lines, anyway). I have an old Canon 7D that's damn near indestructible (think old-Nokia-phone style). I've dropped that thing in lakes, had it out shooting during monsoons and snowstorms, rolled it down mountains, buried it in sand, and dropped it from serious heights (shattering more than a few lenses along the way), and it just keeps ticking. The body is pretty banged up and I've had to clean/replace a few internal parts from the wear and tear, but it just keeps on ticking.
I have several replaceable-sensor cameras and the bodies/lenses were made before I was born. They still take great pictures because it turns out that focusing light onto stuff is largely a solved problem. (They don't even use batteries, much less need an up-to-date OS! But I will admit that carrying a separate light meter is a little annoying.)
Yes, there will always be people that need a 1 billion point autofocus system, and that will require some heavy software engineering. That is a niche use case, even more niche than needing a digital camera to begin with.
One thing that I'd love to figure out is how to have ML control the timelapse, but have the camera signal an external motion controller. Usually, the motion controllers want to control the camera so everything is in sync. However, as ETTR increases the shutter time chasing a sunset, the motion controller needs to know to delay the move. I have built devices connected to the shutter release port waiting for the voltages to change, but that didn't work. I was hoping the voltage would drop when triggered internally, but it seems the port isn't wired that way. It's almost like they have an opto-isolator on the port, or some other method to protect it, that doesn't let the voltage drop when the camera triggers itself.
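Since the shutter-release port apparently won't report internally triggered exposures, one workaround is to flip the control flow: let a single script own both the camera trigger and the rail, and delay the move by the current exposure time. A rough sketch, where `trigger_camera()`, `move_rail()`, and `get_exposure_s()` are placeholders for whatever GPIO/serial hooks a real rig would use (they are not ML's API):

```python
import time

# Rough sketch: instead of sensing voltage on the shutter-release port,
# one script owns both the camera trigger and the rail, and simply waits
# out the current (ETTR-adjusted) exposure before allowing the move.
# trigger_camera(), move_rail(), and get_exposure_s() are hypothetical
# stand-ins for real hardware hooks.

def run_timelapse(frames, get_exposure_s, trigger_camera, move_rail,
                  settle_s=0.5):
    """Shoot `frames` exposures, moving the rail only between them."""
    for _ in range(frames):
        exposure = get_exposure_s()      # re-read each frame: ETTR may
        trigger_camera()                 # have lengthened the shutter
        time.sleep(exposure + settle_s)  # wait until the shutter closes
        move_rail()                      # safe to move now

# Dummy hooks just to show the call pattern:
log = []
run_timelapse(frames=3,
              get_exposure_s=lambda: 0.01,
              trigger_camera=lambda: log.append("shoot"),
              move_rail=lambda: log.append("move"),
              settle_s=0.0)
```

The key point is that the exposure time is re-read before every frame, so as ETTR stretches the shutter, the move is automatically pushed back.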
This company has needed a management house-cleaning for at least a decade. They're just wandering in the weeds at this point. Does no one there actually pursue filmmaking or photography? It's unreal.
I mean... there's no excuse for any digital camera, especially one made in the last decade, not to have an intervalometer. It's essentially FREE to implement.
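To put the "essentially FREE" claim in perspective, the whole feature is a loop and a clock. A minimal sketch, with `fire_shutter` standing in for the camera's internal capture call:

```python
import time

# Minimal intervalometer sketch: the entire feature is a loop and a
# monotonic clock. fire_shutter() is a placeholder for the camera's
# internal capture routine.

def intervalometer(interval_s, shots, fire_shutter):
    next_fire = time.monotonic()
    taken = 0
    while taken < shots:
        now = time.monotonic()
        if now >= next_fire:
            fire_shutter()
            taken += 1
            next_fire += interval_s   # fixed cadence, so timing error
        else:                         # doesn't accumulate frame to frame
            time.sleep(min(0.01, next_fire - now))
    return taken

frames = []
shots_taken = intervalometer(0.01, 5, lambda: frames.append(time.monotonic()))
```

Scheduling each shot from a fixed cadence (rather than sleeping `interval_s` after each exposure) keeps long sessions from drifting.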
But then, Canon's the company that was still pushing interlaced video well into the 2000s and launched the SLR video revolution... with a camera that could shoot 25 and 30 but not 24 FPS.
It wasn't just Canon. There were still a lot of tape based cameras. Non tape based formats were still expensive (P2s etc). It wasn't until the 4K spec came out that interlacing was finally dropped. You have to remember that waaaay back in 1996 when HD first started broadcasting, the signal was still analog and flat panels were not common at all. HD CRTs were a thing, and required interlacing. Just like color TV had to remain backwards compatible with B&W, they chose to make HD backwards compatible with SD.
One myth/legend says a Canon engineer not in the DSLR department realized what the camera's chip was capable of, and mentioned that he could make it record MP4 video sort of as a fluke. It wasn't something they set out to do. It worked, so they rolled it out. I don't know the validity of that story, but it sounds like Canon didn't expect the feature to be that popular. As with all v1 products, some things just weren't there, like 24fps. I can't remember if any video cameras at that time had full-frame sensors; I'm thinking 2/3" or 1" sensors were the norm. The shallow depth of field that the full-frame sensor brought to the ~$2500 market is what did it, as well as a readily available large selection of lenses. No more ENG look for video without shooting film.
They do have wifi on some models, but the app and linking aren't trivial. I haven't bought a camera in years, but they were also selling GPS add-ons when everyone else was integrating it into their cameras.
I have a 5DmkII and the quiet mode and image quality are quite astounding when shooting indoors. (I'm a community orchestra's photographer)
Or even better, the PC sync port. It's quite a simple plug, easy to get one and connect to the camera; the hot shoe is a bit fiddly to connect to (unless you have a spare shoe and are willing to modify it).
Also setting flash sync to the second curtain will close the circuit just before the exposure ends instead of when it starts, which may be beneficial for interval calculations with long exposures.
Edit: the circuit is insulated from the camera electronics, since some flashes can send 250V or even more through the PC socket; it acts as the pathway for the charge from the capacitors to the bulb (in some, esp. older, strobes).
The crazy thing is that I come from a video engineering background, and image acquisition with just a histogram is still foreign to me. Give me a good waveform for exposure and vector scope for color, and I'm much more confident about the image. I want my camera cart to look like a DIT station!
It would be interesting to A/B your ETTR and 'correctly' exposed images for your friends without letting them know.
This shows a pure comparison (it should link to the right time-mark). First shows some before/after full shots then does a split screen comparison in various situations:
It would also be nice to see it without Youtube's compression, if anyone has a better link.
(Your second video seems much more in line with what I'd expect, but it's still hard to see clear differences with Youtube's compression.)
Many choices must inevitably be made along the pipeline from estimated-electron-count-per-sensor-pixel -> image on a display. Default choices are not inherently more correct (or truer to the scene or whatever) than deliberate choices.
The fairest comparison is probably to find someone highly skilled at image editing and get them to try to make the best output images they can from both inputs. You’ll get to see places where the standard processing and compression actually lost data.
Often the processed-in-camera version still has enough data that a more-or-less comparable output image can be produced, but if you start peeping on pixels you might notice extra noise, banding, blur, ringing, ...
The problem with the comparison videos in this thread is that the person/software deciding what to do with the “raw” data has made a bunch of choices to allocate more contrast to shadow and highlight areas, etc., while the in-camera software made a different choice... and nobody tried to reconcile the two versions afterward.
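To make those "different choices" concrete, here are two illustrative tone curves mapping the same linear sensor values to display values: one plain gamma, and one that deliberately lifts the shadows. Both functions are made up for demonstration (neither is any camera's actual pipeline), and neither is more "correct" than the other:

```python
# Two illustrative mappings from linear sensor values (0..1) to display
# values (0..1). Both are hypothetical, not any real camera's processing.

def gamma_curve(x, g=2.2):
    """A plain gamma curve, a common 'default' choice."""
    return x ** (1.0 / g)

def shadow_lift(x, lift=0.15):
    """A deliberate choice: spend some output range lifting the shadows."""
    return lift + (1.0 - lift) * gamma_curve(x)

linear = [0.0, 0.05, 0.5, 1.0]
default_out = [round(gamma_curve(v), 3) for v in linear]
lifted_out = [round(shadow_lift(v), 3) for v in linear]
```

Both curves "use" the full output range, but they allocate contrast differently; that allocation, not the raw data itself, is most of what the comparison videos are showing.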
Comparing the amount of usable data available
"I disagree. When I shoot raw and h.264, my camera has the ability to shoot both at once and spit both files out, and this is exactly how it looks with no color grading, fresh out of the camera."
Here's my stupid question of the day: given that this color grading of RAW streams makes _SUCH_ a massive difference... why don't companies like Canon and Sony just ship it with the product? Why do their cameras fall short of having something like Magic Lantern? In the end it gives the customer what they want... surely Sony/Canon are capable of producing something like this, so why not produce it?
They can charge this much because they're competing with high-end cinema cameras that are used to shoot blockbuster films, such as the Arri Alexa LF ($100,000) or the Red Monstro 8K VV ($79,500).
Some video camera manufacturers have started offering raw at lower price points. The Blackmagic Pocket Cinema Camera 4K shoots 4K raw at 60fps for only $1,300. Its sensor is Micro Four Thirds, though, which is a lot smaller than the full-frame sensor in the 5D Mk III.
It's not just the firmware that makes the difference between a DSLR and a pro video one, but ergonomics and hardware connectivity. Videographers tend to be more demanding on technical support issues too IME.
However, you can get very good cinema cameras for not very much money from Blackmagic that shoot raw video. I have both their 1080p pocket and 4k pocket cinema camera, they are awesome.
Magic Lantern is an amazing project too, but reverse engineering the firmware is a lot of effort and never quite bug-free.
But it’s mostly a business decision I guess.
- e.g. is it more cost-effective to use the same HW in multiple devices but limit their capabilities through SW by segment => total cost covered by many mid-range devices sold AND a few high-end devices?
- and/or maybe what "magic lantern" allows you to do wasn't certified to work on 100% of the HW.
- and/or "magic lantern" did some SW development of its own, which would mean "additional costs" if done by a company (and here is where open source hits one of its targets?). Meaning that maybe at a certain point Canon said "ok, we've spent all our budget for this price class of camera vs. the expected earnings, so that's it".
I don't know if these are presets or something done in post-processing, but it almost looks like the video equivalent of filter plugins in some cases.
I'm by no means an expert but color grading video is often a detailed process with a lot of room for finding your own creative "vision". Think of the split toned, green/orange aesthetic that was popular in movies for a while. Doing the color grading can have a major effect (good, bad, or just different) on the mood of a video.
In these examples, I was definitely impressed with some aspects of how curves were adjusted to bring up shadows and make details more visible. But at the extreme end, it reminded me almost of the overkill seen in early HDR photography.
Some tweaking can make things look more realistic and bring out detail but it is very easy to go too far and end up with something that looks like an Instagram filter.
That's mainly why I was wondering if the video examples were in-cam presets when running Magic Lantern, or if it was something the videographer did in post with the extra data captured with the higher bit depth enabled.
It should be theoretically possible to do anything to each video frame that you can do to a still photo, including moderate/tasteful color & contrast adjustments.
Some photo printers love to allocate almost all of their available contrast to large-scale shapes, producing essentially silhouettes. Others like to allocate almost all contrast to local fine detail, leaving the image looking like a gray blur from afar but detailed and crisply textured from close up. Some photographers like their images to be a festival of competing intense colors, while others make nearly monochromatic images in one color or another, or stick to a pastel palette, or make mostly neutral images with a few intense exceptions. Etc.
When someone says a photo or video was printed badly, what they usually mean is that either (a) the printer had shallow aesthetic judgment or boring artistic goals, and/or (b) the printer lacked the skill to effect their artistic vision.
is this false for the factory default?
Currently we have great hardware with terrible software and vice-versa.
A Sony full frame sensor, Zeiss glass, with an A series chip and good software to make it shine.
I did – briefly – and am still recoiling from the horror.
> Currently we have great hardware with terrible software and vice-versa.
The software on even older DSLRs isn't terrible. It does the job it was intended to do quite well. It is learn-able, and it doesn't change its behaviour with an OTA-update you can't easily prevent.
More thought has gone into the UI design of even the earliest, rushed-to-market DSLR firmwares than some of the UI changes that Apple and Google have inflicted on their victims.
> The software on even older DSLRs isn't terrible.
Oh yes it is. Like salt on a fresh wound.
> It does the job it was intended to do quite well. It is learn-able, and it doesn't change its behaviour with an OTA-update you can't easily prevent.
Its menus are completely inscrutable. But I'm not even talking about usability. One can learn anything eventually, given enough willpower.
I'm imagining a processor like the one on the iPhone/iPad Pro acting on all those pixels. And a true software platform running on top of it.
The beauty of a DSLR is that you turn it on, and it's ready to go. Take pictures in raw and process them when you get home. If you want to adjust the exposure and what not, you can; if you do that a lot, you can get the higher model cameras with more dials, and customizable buttons so you don't have to go through menus as much.
And I envy the amount of processing that goes on when you press the button.
From turning a noisy mess from a fingernail size sensor into pretty good JPGs to auto choosing the best photo from crazy fast burst mode, even before you press the button!
And audio/video, oh boy. Hyper slow motion 4k, noise canceling…
Imagine what Apple/Google could do with something the size and weight of a DSLR and a US$3k budget.
I love my iPhone and I'm happy to go to bat for Apple when I agree with them, but the Camera app on iOS is painfully slow compared to a Canon DSLR. The Canon can wake from sleep and take a photo in the tiniest fraction of a second.
I’d rather have my Nikon, but I wouldn’t be troubled if I needed a quick shot and only had my iPhone.
Where the iPhone wins is when it’s already in my hands.
Also, you don't even need to. You can swipe from the bottom and pick the camera app from control center.
It's probably 1.5 to 2 seconds from screen off to photo taken.
Apple/Google would add too many features, focus on wifi connectivity, and generally ruin the one amazing thing about cameras–the ability to turn them on and immediately take photos, no fuss. No waiting for software updates, or alerts, or any of the million other things that do not belong in a camera.
The 1DX Mark II (admittedly a very expensive pro camera) goes from physically switched off to first shot in 0.8 seconds, and from standby to image capture with full autofocus in 0.085 seconds: https://www.imaging-resource.com/PRODS/canon-1dx-ii/canon-1d...
My D750 can go from OFF -> ON -> shutter press -> focus -> picture taken in about 0.5s in normal light conditions. There are so many moments that I managed to capture only because of this speed; they would be gone in those 3s.
Not even going into "details" like having 5x mechanical zoom on it (24-120mm); I can snap a distant scene in 1s in the above scenario. And I do snap those scenes quite often.
Shutter response time on my Canon bodies is adjustable between slow and fast. Respectively 55 and 36 milliseconds.
I doubt any phone can even wake from sleep in that time.
I think a lot of professionals and prosumers who are still buying interchangeable lens cameras don’t care. Folks generally shoot in RAW and edit their photos on a PC with Photoshop and Lightroom, which affords people much more control and is much more powerful than whatever is in an iPad.
The only photographers shooting JPEGs are sports and news photographers, and they have specialized use cases. That’s why the pro cameras (1DX2 and D5) come with Gigabit Ethernet and auto-upload to an FTP server through that.
People want the pre-capture sequence to be reliable and easy to use (so lots of buttons and dials) and in general once the photo is captured and saved, they don’t care and don’t want the camera involved any more.
Of course, if people really want to upload to Instagram immediately, most cameras nowadays come with Wifi so they can just transfer photos to a smartphone and do whatever they need to do there.
An iPad Pro has one heck of a CPU/GPU, rivaling high end laptops, where most edits are made.
Having something like that (and custom silicon, perhaps) dedicated to pro photo/video/audio is something we haven’t seen.
Even if the camera can do more post-processing, I am not sure I want it to. I would rather the camera do less after exposure is complete, so I can get more control over it while editing.
Not parent, but FF-lust is objectively bizarre.
Read this (now very old) rant on the subject 
Then consider advances in optics, CAD, and sensors, and then marvel at why some people are desperate for an arbitrary sensor size that was plucked out of the air more than a hundred years ago.
Nothing special about 35mm; it's just that we have a ton of amazing glass made and bought to go with it.
And even though there are diminishing returns, quality wise, the more photons the better.
I’m sure they could make APS-C, Micro 4/3, 4x5”, to please everyone. I wouldn’t complain :)
Leica T series is probably pretty close to what they would come up with tho. Solid block of metal, and nothing but screen on the back.
I'm pretty sure that the major difference between a pro and a consumer camera is the amount of easily accessible buttons and knobs.
Precisely this. After 10K shots on a Canon 1200D, I upgraded to an 80D specifically to have access to the top controls to quickly change settings while shooting. You can remap a lot of the 1200D's controls in the "Expert Feature"/customizations section, but the 80D's extra ergonomics are a true joy. The camera is molded to my hands and comfortable to wield for hours at a time.
I can't say the same for the smartphones I shoot with. My Blackberry KeyOne's camera has fully manual controls (even the focus distance can be manually set!), but the "slide your finger to control settings" UI sometimes feels like the "Reddit designs volume control" experiment from a few years ago. I empathize with the Java programmers, but it's extremely frustrating to tell the smartphone what kind of picture I want. It should go without saying that it's equally frustrating to be unable to hold the slab phone adequately still.
Reading people's passionate hatred for real cameras makes it quite clear that they aren't the target audience for a simple mid-range DSLR, let alone monsters like the 1D or Nikon D5.
But they can make great physical buttons. That last iPhone home button was one of the best in the industry.
I bet Jony Ive and Phil Schiller love Leicas
I have never understood why giants like Canon and Nikon can't match the feature set, ease of use, etc. of smartphones. In many ways this is playing out like it did with Kodak, where they sat on their traditional business for so long that there was ultimately no way back.
That way you get to keep lens sizes very small, but your effective sensor size is quite large, and you can do a lot of other cool computational photography tricks with multiple sensors.
I think this is Stockholm syndrome speaking. Most photographers have no problems installing apps and upgrading their phones.
Yet ask them to set a timer, change focus/metering point and it's a nightmare. Let's not even talk about Canon's camera control/export app, whatever it's called.
Software is hard.
I deal with photographers in their 50s/60s weekly. They all love their phones and hate the menu screens on their Canon/Nikon/Sony. You set it up the way you like it and hope never to need to touch it again.
Time how long it takes you to find and change the Exif copyright on your photos, or daylight saving, or how to save JPGs to one card and RAWs to the other. It’s a nightmare.
This might blow your mind, but professionally-oriented cameras all have dedicated buttons to toggle metering modes without looking away from the viewfinder.
In comparison, I was trying to set bracketed exposure on my RX100 yesterday, and I still haven't managed to find the feature.
I can operate my Nikons blindfolded and single-handed. Canons are not my cup of tea, but I can get by if I need to.
Sonys, on the other hand, have amazing sensors and great glass, yet bury such important features under menus requiring pressing the Fn button while staring at the LCD, or something like it.
You might want to avoid making arbitrary assumptions about anonymous debaters, who may just as well be far more experienced than you, especially when said experience is not particularly relevant to the discussion of easy operation.
It's not interesting to know if someone can efficiently navigate an instrument after 25 years of experience with it. At that point, quirky UX ends up entirely concealed by muscle memory, and such person ends up being a poor source of input for UX matters.
>…who may just as well be far more experienced than you
I wasn’t referring to myself. But to photographers in their 50/60s.
> It's not interesting to know if someone can efficiently navigate an instrument after 25 years of experience with it. At that point, quirky UX ends up entirely concealed by muscle memory, and such person ends up being a poor source of input for UX matters.
Agreed. I made the same point a few comments above.
I can't tell how many years of experience someone has from a profile picture. Can you?
> I wasn’t referring to myself. But to photographers in their 50/60s.
You worded it such that it referred to yourself. You can't argue using other people's experience.
What's the issue, then? The Nikon shooting interface obviously works well for you, and I think that is far more important and used than changing daylight savings settings.
> Time how long it takes you to find and change the Exif copyright on your photos, or daylight saving, or how to save JPGs to one card and RAWs to the other. It’s a nightmare.
I don't know enough about Nikon or Sony, but I use Canon and I know where all those settings are. Even if you want to search the web to find where they are, I don't think it is a big deal: most of those settings you listed I change once or twice a year.
In any case, I think it is better than the settings app on my iPhone, where the General sub-menu contains everything from controlling Background App Refresh (Why not in individual app sub-menus?) to enabling iTunes Wi-Fi Sync (there is an iTunes and App Store sub-menu too).
For instance, I can't imagine trying to cram the Lightroom UI into a 4" screen on the back of my camera is a good experience, especially when I have a 30" 4K screen at home to edit on. To me, the current Lightroom for iOS is nowhere near as usable as the desktop version, and still quite a bit slower.
What they already do in smartphones, but better. Almost a guarantee that you’ll never miss a shot (no blinks, blur, or bad exposure), better signal processing, etc.
> I can't imagine trying to cram the Lightroom UI into a 4" screen
Me neither. I’m not proposing that in the slightest
OK, but that seems to be turning the camera into a point-and-shoot and I am not sure most of what is left of the ILC market wants that (or maybe it is just me). For me, I would ultimately want to control things like exposure, shutter opening, etc... myself (who is to say a "bad" over/under-exposed photo or some blur won't make for a better photo?).
I think people who want a point-and-shoot-like experience are, by and large, happy with smartphone cameras today (especially with advanced processing like Android's Night Sight), and they are not going to carry around a bulky ILC just to take photos.
In any case, I think Zeiss is trying to do something similar to what you want with the ZX1, and I will be interested to see how large that market is.
Not at all.
>For me, I would ultimately want to control things like exposure, shutter opening, etc... myself (who is to say a "bad" over/under-exposed photo or some blur won't make for a better photo?).
100% agreed. But if instead of “shit, I missed this shot” I could have 3 taken before I even pressed the shutter, and one of them is perfect, I'd take it in a blink. Besides all the pie in the sky stuff that I imagine could be done as well.
>I think people who want a point-and-shoot like experience are by-and-large are happy with smartphone cameras today (especially with advanced processing like Android's Night Sight), and they are not going to carry around a bulky ILC just to take photos.
You're probably right. I don't know if it's a viable market, but one can dream.
Sounds like what you might want is a 8K video camera with a ring buffer that gets flushed to storage whenever you press the shutter button :p
About the rest, I've never timed it because I've never needed to do it. Lightroom does most for me, and even the card thing would be roughly a one-time thing. I don't know what a comparison between a 6" touch display on a phone and a 2" non-touch display on a camera is supposed to show, but the camera menu isn't that hard: press menu, scroll to the appropriate section, select menu item and change what you need. It's pretty much the same on phones as well.
Get into the enthusiast / prosumer models and there are more buttons for a reason. I want to change metering on my 5D4? I press the metering hard button on the top of the camera.
The reason people dislike menus is that there's a lot of things to control. It's nice without it, right until you need to tweak something.
Canon's smartphone app is quite terrible (although this is a slam at smartphone UX, not camera UX), but I can't really see smartphone apps for cameras as anything other than a useless gimmick.
However, that app has no relation to how it is to use a modern camera. Comparing the UIs of iOS, Sony's Android camera app, Sony's A6000, and Canon's mirrorless/DSLR bodies, Canon's modern mirrorless UX is by far the best: easiest to use, prettiest, etc.
For everything an iPhone can do, Canon works exactly the same, but then Canon also does so much more. Focus point selection is by touch screen, and it's much more responsive than my phone. It also seems much smarter with regard to tracking. Important settings are in the HUD; the rest are in a quick menu one touch away.
All of this is of course subjective and anecdotal, but I would be quite saddened to end up with smartphone-d cameras. I do not see it making any positive improvement.
I find computational photography incredibly exciting, but it's mostly taking place in the smartphone space, with tiny sensors and lenses. Imagine what we could do with great glass and a huge honking sensor.
You think that highly of smartphone makers?
Would you want them to re-make the aircraft cockpit too?
Yes, please. And a kickass processor with crazy fast memory to go along with it.
>and physical controls honed over decades and used by professional photographers in all kinds of scenarios?
Not at all. We love knobs, rings and wheels with great distinguishable tactile feeling.
A more intuitive settings menu would be a side benefit.
Truly revolutionary would be to able to write first class software to a mature capable OS.
What could the state of the art in demosaic, noise, latitude, color accuracy be with that kind of speed and tools?
Mine is a supercomputer that fits in my pocket, with a battery that lasts all day.
I trust them to make something that can be at least 10X thicker and heavier and perform comparably.
Last all day means nothing in the context of what you brought up.
I don't follow Android's ecosystem much, but battery usage on iOS is pretty restricted and very much under control. It can be done.
Do you shoot JPEGs exclusively? I shoot RAW, so the only thing I care about post image capture is being able to preview my photos, show a histogram, zoom in to check for focus accuracy and delete images. Everything else I can do on my desktop, which will always be much more capable than whatever I can put on-device.
But even RAW is not as “unbaked” as you might imagine, especially at higher ISOs.
In other words, what improvements do you think camera manufacturers can add to the post-capture RAW workflow if we had a faster processor/better software?
(Of course, in pre-capture, there is always room for more improvements to metering and autofocus, but camera manufacturers know this and are constantly making adjustments and improvements each generation.)
And of course, all the stuff during capture you mentioned could be a lot better too.
Picture being a camera engineer and suddenly have 10X more speed and a decent platform to work with. 32bit multiple exposure RAW in a single click, who knows.
Yeah, I get that, but I don't think there is an improvement here by sticking a modern smartphone CPU and OS into a camera.
Making improvements to those metrics ultimately require newer sensors with lower read noise and better on-chip ADCs. The camera manufacturers can't do 24-bit RAWs @ 20fps when the sensor only outputs 12-bits @ 10 fps.
If we are doing multiple exposures, then as you know, cameras already have exposure bracketing, and I can do the rest in my PC.
Which deep-pocketed tech companies could do :)
>If we are doing multiple exposures, then as you know, cameras already have exposure bracketing, and I can do the rest in my PC.
Sure, but we have gyroscopes, accelerometers, electronic shutters and 2Ghz processors on this dream device of mine.
Surely we can do better than using a tripod, exposing entire scenes multiple times, and trying to align and blend them on the computer hours later.
Yeah, unfortunately I don't think big tech is that interested. Unless it is a strategic move, even at $1b/year in profit, it is not worth Tim Cook's time to do this ($1b/year is less than 2% of AAPL profit). I am not sure there is $1b/year in profit to be made across all the ILC manufacturers at this point.
> but we have gyroscopes, accelerometers
These are helpful and I wish cameras would record these and stuff them into the EXIF data somehow, so that I can do better post. I think we just have some disagreements about where post-processing should happen :p
Looks like they are supposedly emulating some of the camera's functions? I don't know enough about this camera for it to make any sense to me.
But in practice they implemented this poorly. They could update the URI with replaceState so you don't break the back button, and ignore keyboard shortcuts if a modifier key is pressed. One of the reasons I like projects with open-source websites - you can just file an issue on the website itself!
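The two fixes suggested above can be sketched roughly like this (the key binding and fragment name are made up for illustration; only the helper is real logic):

```javascript
// Pure helper: should a single-key site shortcut fire for this event?
// Ignoring events with any modifier held keeps browser shortcuts
// (Ctrl+L, Cmd+F, Alt+Left, ...) working as expected.
function shouldHandleShortcut(event) {
  return !(event.ctrlKey || event.metaKey || event.altKey);
}

// Browser-side wiring would look roughly like this ('j' and '#next'
// are hypothetical bindings):
// document.addEventListener('keydown', (e) => {
//   if (!shouldHandleShortcut(e)) return;
//   if (e.key === 'j') {
//     // replaceState rewrites the current history entry instead of
//     // pushing a new one, so the back button still leaves the site.
//     history.replaceState(null, '', '#next');
//   }
// });
```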
Also, Sony makes sensors for many other camera companies, including Nikon.
The biggest issue with the GX-85 for film is its lack of audio input; that can be circumvented with a separate audio recorder, but it's certainly not perfect.
For the same reason I'm curious that Sony is not more open - their strategy appears to be the opposite of Canon's, throwing the kitchen sink of features into their entry-level bodies. And yet they have removed the ability to load Android apps onto their latest cameras...
With a sample editor that can use VST/AU effects, and the free version of this, or something that has the same functionality: https://www.tokyodawn.net/tdr-nova/
1. take a frequency band, set its Q to max or near max, and its gain to max
2. sweep the frequency until you find the one that makes it sound extremely boxy (it's probably below 1k, but don't take my word for it; you will hear it when you find it)
3. now set the gain to zero, activate the threshold button, and dial in a threshold value that just makes the boxiness disappear without making it sound too thin
4. repeat with more frequencies as needed/wanted
5. compress or normalize
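The "sweep until it jumps out" part of step 2 can even be roughed out in code. Here's a sketch (function names are my own, and your ears remain the final judge) that uses the Goertzel algorithm to sweep a band of candidate frequencies and report where the signal's energy peaks:

```javascript
// Goertzel algorithm: energy of `samples` at a single frequency.
// Cheaper than a full FFT when you only probe a handful of frequencies.
function goertzelPower(samples, sampleRate, freq) {
  const coeff = 2 * Math.cos(2 * Math.PI * freq / sampleRate);
  let s1 = 0, s2 = 0;
  for (const x of samples) {
    const s0 = x + coeff * s1 - s2;
    s2 = s1;
    s1 = s0;
  }
  return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// Sweep the band where "boxiness" usually lives and return the
// frequency with the most energy - a crude stand-in for sweeping
// a max-Q boost by ear.
function findResonance(samples, sampleRate, fmin = 100, fmax = 1000, step = 5) {
  let best = fmin, bestPower = -Infinity;
  for (let f = fmin; f <= fmax; f += step) {
    const p = goertzelPower(samples, sampleRate, f);
    if (p > bestPower) { bestPower = p; best = f; }
  }
  return best;
}

// Synthetic check: one second of a 440 Hz "resonance" buried in noise.
const sr = 8000;
const samples = Array.from({ length: sr }, (_, i) =>
  Math.sin(2 * Math.PI * 440 * i / sr) + 0.1 * (Math.random() - 0.5));
console.log(findResonance(samples, sr)); // prints 440
```

In a real workflow you would feed this a recording, then put the dynamic-EQ band at the reported frequency and set the threshold by ear, as in step 3.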
> resonances of your room/speakers/headphones
Yet the frequencies that stand out depend so much on the input material, and aren't the same between recordings in different rooms or with different mics, even with the same playback setup. Weird.
As the manual of that VST so terribly states
> This is an excellent tool to correct a boxy low-end sound, even out resonances in a recording or reduce excessive sibilance in a vocal part.
Sure, they don't outline this specific approach, but others do, sadly:
> In my experience visual analysis won’t usually help a great deal here, and you have to adopt more of a ‘hunt and peck’ approach, sweeping a narrow EQ boost around the spectrum and then placing an EQ cut wherever the boost sounds most unappealing
And it just goes on and on :(
Of course, since you said it's such terrible advice, my ears noticing the results are probably lying to me, and I can hardly wait to find out how to do it correctly at last.
Just because something is repeated a lot doesn't automatically make it a great idea: see "Brexit means Brexit" or "Build The Wall" for examples.