Magic Lantern (magiclantern.fm)
707 points by chris_overseas 29 days ago | 235 comments



Recently I got a second-hand Canon EOS M, an old mirrorless, for only $200 USD.

Thanks to ML I can get footage at 2.5K in 12-bit RAW, or pseudo-4K in an anamorphic format. If you're a film student on a small budget, the amazing work from the people behind the project is a gift from heaven, as the quality is almost film-like (ready for regular LUTs).

Basically you can have RED or Blackmagic camera color and HDR-ish quality on a small budget... and a lot of patience and free time.

Still, though... I would not use ML RAW for client production projects (like a wedding or commercial), as most builds are in an "always-beta" state, with a lot of bugs and a slow editing workflow for a professional setting. In those cases, I would recommend recording in plain old H.264 with a flat profile, and using all the other ML features like zebras and the live histogram.

Kudos to the ML devs, who, against all logic, are contributing a ton of work to the indie film community.


HN tip of the month ;)

When Canon released the EOS M as its first mirrorless camera, it didn't go down well in many reviews because it was a slow performer[0]. This was actually addressed in a later firmware upgrade, but by that point the damage had already been done. After that, Canon basically dumped this camera on the market at very low prices.

The good thing for us is that this little camera can now be bought very cheap second hand (got mine last year at around $100). Once you load up ML, you get a fantastic, fun little camera! I'm not very experienced with shooting video, but for photography it's a wonderful experience. Focus peaking, magic zoom, interval photography, all for free.

Many kudos to the ML devs from this side as well!

[0] https://kenrockwell.com/canon/eos-m/m.htm


Thanks, I went and found an EOS M for sale after reading your comment and bought it. Turns out there's a shutter bug, and to get around it I need a specific brand and model of SD card. But I am enjoying the camera so far. Looking forward to capturing raw video with Magic Lantern after I buy that SD card.


I'm sort of amazed at the cost of high-end cameras, especially when what separates them from a modern smartphone seems to be nothing but bigger/faster sensors and the ability to use big lenses. The camera manufacturers use the lowest-powered CPUs they can get away with, running proprietary real-time operating systems with horrible UX and no third-party support, for the price of a high-end laptop. It really seems like something is off in the market, but there doesn't seem to be any real change happening.

I needed to do some video work last year, so I basically followed Casey Neistat's suggestion for a Canon camera, only to find out it wouldn't allow me to export the video live via an HDMI cable without the camera's overlay on top. Turning all the elements off still left me with a red dot in the corner. Apparently I got the wrong model and the one $800 more expensive would do this. After a while I gave up, went back to the store and got a cheap video camera that could do it, BUT again, I had to get the more expensive version, because the same model in the cheaper option didn't allow this particular use case.

I was really surprised by how backwards this all was! Proprietary OSs, artificially limited functionality, anemic processors, interfaces out of the 1980s. It's crazy!

Has someone slapped Android on a decent camera yet?


They're not for everyone, but I think you're overlooking quite a bit. The larger sensor is crucial in low-light situations, and a smaller phone camera just won't handle that. The "horrible UX" is faster by a significant margin, partly by being a dedicated device, partly by having a lot of dedicated buttons, and partly because the response time on pressing the shutter release button is extremely short. If you want to carry a camera for capturing events or moments quickly, I'm not sure that a phone will do that for you.

And yes, the high-end full-frame DSLR costs are on par with high-end laptops, but IMO you get 90% of the benefit for a fraction of the cost with an entry- or mid-level crop sensor.

As for the UIs, I'll definitely acknowledge they could stand some updates, but the way you describe it sounds like an overstatement. What would you improve, exactly?


I'm very familiar with the staggering difference in image quality between a smartphone and a DSLR - I have a full-frame myself.

In terms of the UI however, it's definitely very out of date.

When I upgraded my camera, one of the advertised features of the new one was that they added an extra digit to the internal intervalometer. Instead of being able to go between 0-999, the new one could go from 0-9999.

The fact that this couldn't be a simple software update still astounds me.


A photo management UI on par with smartphones. Fast, easy to use and easy to do edits and sharing with. Along with GPS, WiFi & LTE connectivity that doesn't suck. Maybe direct app integration with instagram and other apps. Put an OLED screen on the back, not just the viewfinder.

Basically slap a fucking android phone on the back when browsing photos. Make it a separate CPU if necessary.

Embrace the 'casual camera' body style more. The viewfinder should not be a big chunky thing in the center, but on the side, and pop-up if necessary. Get rid of grips and make bodies a uniform rectangle, and include a battery grip in the box to make it SLR-grippy. People are scared of SLR-style bodies, while they are not scared of casual camera bodies.

They can continue serving the standard pro market who want the big chunky SLR-style bodies; this is how they could better capture the 'high-quality life memory / Instagram influencer' market.


Are you actually, and unironically, suggesting that cameras should be replaced by smartphones? I mean, edit on a camera? If that is a serious use case for you then you can make do with your smartphone, you don't actually need the accuracy of dedicated hardware and optics.


The mass market casual camera today is a smartphone, which comes with what you describe.

Canon, Nikon, and others have long decided they can’t and won’t compete with smartphone makers and are increasingly moving up-market to the prosumer and professional markets who want “big chunky SLR body styles”, and they use specialized software like Capture One and Photo Mechanic to view/edit/organize photos anyway.


While I will agree SLR body styles are easier to use, it's not that we want them; it's that we accept the size and weight as the price we need to pay for performance that you simply can't get out of a phone.

These days when the shot is easy the phone is at least in the ballpark with the big guys. When the shot is hard the phone isn't even in the running.


There's also the whole world of non-SLR pro cameras.


What's funny is that Samsung actually made an Android DSLR several years ago, called the Galaxy NX.

It ran full Android, you had all your file management, Bluetooth and wifi, could play games if you felt like. Could encrypt the filesystem, too, which I think is super important for journalists.

Basically nobody noticed or cared.


Zeiss, of all companies, is going to make something like that (the ZX1). I even like some aspects of the design.


Oh, please god no. Keep your android OS away from my mirrorless camera. Can you imagine having to take the camera away from your eye to use a finger to slide a UI slider to increase/decrease shutter speed?

I have a Panasonic and the OS is pretty freaking great. Ten to twenty minutes of poking around and maybe a few hours of real-world usage, and I can customize my own buttons and quick menu for what works for me. I don't want some bloated POS Android/iOS apps on my camera.

As far as the hardware is concerned: yeah, there is a lot of artificial segmentation in the market. I think this is why the camera market is failing badly.


This. My Nikon D3s is covered with physical buttons and dials. I can change shutter speed, aperture and ISO all the while looking through the viewfinder. I've also set the function button, rather than the shutter release button, to trigger autofocus - meaning that I don't lose shots when the camera forces a refocus just when I want to take the picture (e.g. pre-focused wildlife shots where I set the focus to where an animal is going to be). With practice comes muscle memory: slow is smooth, and smooth is fast.

The menu UI is somewhat clunky, but does enable function discovery. There are no Nikon-specific swipe gestures to learn, for example.


You can have an android camera with physical buttons. Samsung did that a few years ago.


> only to find out it wouldn't allow me to export the video live via an HDMI cable without the camera's overlay on top.

Funnily enough, this is a feature of Magic Lantern, which is especially useful if you want to use a 3rd-party HDMI recorder to record for longer than an SD card or the default OS would allow: https://wiki.magiclantern.fm/faq#how_do_i_record_for_more_th...


NICE! I skimmed through the FAQ for this but missed it. I still have the Canon, so I'll have to try this out. Thanks.


I don’t know if it still applies, but it used to be that a camera that allows you to record more than a certain number of minutes at a time was taxed as a video camera, which would explain such artificial limitations.


> Has someone slapped Android on a decent camera yet?

Samsung did this: https://www.dpreview.com/reviews/samsung-nx1 but gave up the market in 2015.

That was likely a good idea from a cost-benefit perspective, as standalone cameras continue to decline in unit volumes: https://www.dpreview.com/news/7719403699/cipa-s-november-num..., but it's a shame from a technological progress perspective. You are totally correct that a camera body really ought to be an Android device, but camera makers have not done that—which likely contributes to their fall in unit sales.


My low-end Canon DSLR from 2011 powers on and is ready to shoot in ~2 seconds. I can keep the camera turned on for hours at a time (with the screen off except while taking pictures) without any noticeable effect on the battery charge level. Can you imagine an Android-based camera body achieving something like that?


Absolutely. An Android phone can last a week in airplane mode and a low-power CPU state.


Zeiss are going to release a full-frame digital camera soon that runs Android [1][2]. The ZX1 also comes with a built-in version of Adobe Lightroom.

[1] https://www.theverge.com/circuitbreaker/2018/9/27/17911212/z...

[2] https://www.dpreview.com/news/3430444458/zeiss-announces-ful...


You're not wrong about the software being bad, but the actual UX involves physical buttons and dials. Haptics are very important unless you photograph completely static scenes from heavy tripods.


I agree.

Having a Nikon and being able to shift shutter speed / aperture with a dial is very important.

You can kind of think of this as a remote control. Yes you can have a touchscreen as the only interface for a remote control, but you always have to look down at it to use it.

On my remote I can control volume and channel without looking at it.


I disagree; what you can get in an inexpensive body these days is pretty crazy. Sony has been pushing the envelope pretty hard the past few years and putting out some truly amazing cameras at unbelievable prices. The A6000 at $800 was crazy at the time, and the A7 III last year at $2000 definitely surprised Nikon and Canon, who were still developing their own mirrorless cameras.

A full-frame sensor alone can easily cost $600-$1000 before considering anything else, yet these cameras are in the $2000 range and include highly sophisticated stabilization systems and professional video features (e.g. 4:2:0/4:2:2 output). Those are typical-to-low margins for a technology product.

Regarding your complaints: realize that higher-end cameras are meant to be professional tools, so they tend to favor conservative reliability and interface continuity over anything else. The camera app crashing like it might on your Android phone is unacceptable on a camera, so they run more reliable, specialized code. Speaking of Android, though, Sony cameras do run a modified version of Android; the previous few generations of cameras had somewhat of an app store, and people even got hacked APKs running on the camera to add missing functionality.

I do agree that camera manufacturers are purposely leaving out software features though. Stuff like an intervalometer and focus stacking should be standard on every camera. Plus, as you mention, some vendors are hobbling features to try to get users to buy more expensive cameras (like a 1.7x crop for 4K video on Canon, because they really want you to buy a C200 @ $7500).

To expand a bit more on interfaces, there is an interesting argument for and against it. It is no surprise that the camera industry is shrinking, and that the people most likely to buy new cameras are going to be those that already own an interchangeable lens camera. Thus manufacturers don't want to upset their customers by changing their interface too much. Companies like Hasselblad that produce more specialized luxury cameras have come up with more modern interfaces for their recent cameras. While I would personally love to see Canon/Nikon/Sony step up here and create a proper, modern touch interface for these cameras, they won't anytime soon. In everyday use, most of the interface with a camera is with physical controls, at most your common interactions with the software is changing options that are part of a quick menu.


Much of the cost is in the glass. There's only so much they can do to make them cheaper without sacrificing quality, at least until metamaterial lenses get cheap enough to obsolete glass. That's why smaller mirrorless camera lenses and prime lenses for DSLRs tend to be cheaper. Primes have fewer elements to craft, align, and make work together. Mirrorless cameras just need less glass.

You don't need the latest camera (which is also the least supported by third-party tools) for most things. A $500 body with a $100 prime lens will outperform a $1500 camera with a kit lens.


Not sure that completely addresses the parent comment's point. Glass is definitely expensive for good reasons (watch some YouTube videos on optical lens manufacturing to see why), but even just a camera body for something like an EOS 6D is still $1000. The sensors can't be that much, full-frame or no, so where is the added cost coming from? Certainly not software or on-board processors. To the parent comment's point, you can get a full-blown flagship smartphone with a 6 inch high resolution screen you can actually use and a good amount of memory baked in for that price.


My 6D just works. Every time, always.

It functions like an instrument that is designed to be used. Treated with reasonable care, from -30C to 100%-humidity-condensing conditions at the freezing point, it has consistently gotten the job done. It is compatible with a huge range of glass, and it all works. The interface stays out of the way -- when I'm making images, I'm never worried about the camera, just the photography.

When I'm working, the camera is always powered on. The battery lasts for weeks in that mode, yet when I depress the shutter halfway, it begins to lock focus in tens of milliseconds. With gloves on.

When these cameras are released, there are rarely, if ever, hardware revisions. They work that well on the day that they are shipped to the first consumers.

For me, the cost (and the value) is in the R&D and the ecosystem.


There are a lot of mechanical controls on DSLRs, and they see heavy use over long periods of time. Building those to withstand hundreds of thousands of operations, and making them dust and water sealed is probably not inexpensive.


Then again I don't think it's that expensive either.

I suspect the problem here to be scale.


A $1,000 retail thing generally has a $300 bill of materials.

I suspect that those are above a $300 BOM given the display, sensor, and plethora of mechanical things (buttons, threads, etc.).

Mechanicals are very expensive compared to electronics. And cameras have LOTS of mechanical thingys.


The shutter for a typical DSLR can be expected to last for 100,000 shots, if not longer, while opening and closing in 1/8000th of a second. It's impressive engineering.


Probably so, my main point was that comparing it to a plastic box with a few buttons and a big screen isn’t really a fair comparison.


Good point about metamaterials. I am doing a PhD in nanotechnology (electronics), and the advances I see in optics at that scale really hint at revolutionary products. Manufacturing still needs a few improvements, though. And I wonder what the market reaction will be? I wouldn't be surprised if photographers were quick to dismiss those millimetre-thin lenses at first.


I read about some big news on using nanoparticles to get rid of cancer. I can't find the article right now, but it had to do with nanoparticles that could safely and specifically travel to cancer cells; at a certain frequency the beads would heat up and kill the cancer cell. That was pretty cool. I did a half-ass job of explaining it, though. Really, most of the stuff I see for nanotechnology is in the field of cancer research.


I am wondering how far we are from creating nano robots that enter the body and fix it. Like nano robots that enter the bloodstream, clean up, fix wounds, etc. More like an improved immune system. Or is this fiction?


When using a DSLR as a still camera for events, having essential controls on physical buttons with no lag is critical. Meter, stop, focus, fire. Eyes on the subject, sometimes through the finder.

If Canon sacrifices any of that for "extra" features, the pros leave for Nikon. And vice versa. These are still cameras first, everything else second.


>what separates it from a modern smartphone seems to be nothing but bigger/faster sensors

The bigger/faster sensors are a big part of the price. The other big factor is all of the buttons and dials to quickly change settings without having to look away from the scene. The fast shutter, mirror, and optical view finder are also big contributors. Look at the assembly animation below [1] of a Canon 10D to see all of the tiny parts packed into a small, ergonomic package, and you'll get a better idea of why they cost so much. Not to mention I've killed 3 phones by drops no higher than 4 feet, while I've dropped my DSLRs many more times and from higher points and they show no signs of damage.

[1] https://www.youtube.com/watch?v=6-HiBDLVzYw


>I'm sort of amazed at the cost of high-end cameras, especially when what separates it from a modern smartphone...

It's not about the software, it's about the build quality (once you get into the professional product lines, anyway). I have an old Canon 7D that's damn near indestructible (think old-Nokia-phone style). I've dropped that thing in lakes, had it out shooting during monsoons and snowstorms, rolled it down mountains, buried it in sand, and dropped it from serious heights (shattering more than a few lenses along the way). The body is pretty banged up and I've had to clean/replace a few internal parts from the wear and tear, but it just keeps on ticking.


They want to sell you a new camera when they come up with new software-defined goodies. Or sell you two cameras, if you need good video and good stills. Until someone changes the business model to something other than pay-for-the-camera, it's probably going to stay that way.


If they put in a good CPU and use a standard OS, they would have more trouble selling you a new camera with newer features, since you could just upgrade your old one.


I doubt that's why people upgrade. If you made the sensor replaceable, people would still be using digital cameras from 1996 except with the latest sensor.

I have several replaceable-sensor cameras and the bodies/lenses were made before I was born. They still take great pictures because it turns out that focusing light onto stuff is largely a solved problem. (They don't even use batteries, much less need an up-to-date OS! But I will admit that carrying a separate light meter is a little annoying.)

Yes, there will always be people that need a 1 billion point autofocus system, and that will require some heavy software engineering. That is a niche use case, even more niche than needing a digital camera to begin with.


A big CPU means shit battery life, and thus a shit camera. And locking into a specific hardware configuration in a field that is basically all about hardware developments seems kind of short-sighted.


Samsung's Galaxy NX range ran Android but failed to gain mass adoption. I think there is a niche market for this and hope someone else takes it up - apparently Yongnuo has one coming.


> apparently Yongnuo has one coming

Zeiss too...

https://www.dpreview.com/news/3430444458/zeiss-announces-ful...


I now hold the belief that any Canon camera that does not have Magic Lantern installed is broken. I use it on my 5DmkII, and it's an entirely different camera. I love the features that ML brings for shooting stills (I don't shoot video on my DSLR). An internal intervalometer is worth its weight in gold. The ETTR ability is also priceless for chasing sunsets/sunrises. For how I use my mkII, the only nitpick I have about ML is that it can't handle intervals shorter than 5 seconds. When I need shorter intervals, I pull out a wired intervalometer, but that's rare.

One thing that I'd love to figure out is how to have ML control the timelapse but have the camera signal an external motion controller. Usually, the motion controllers want to control the camera so everything is in sync. However, as ETTR increases the shutter time chasing a sunset, the motion controller needs to know to delay the move. I have built devices connected to the shutter release port waiting for the voltage to change, but that didn't work. I was hoping that the voltage would drop when triggered internally, but it seems the port isn't wired that way. Almost like they might have an opto-isolator or some other method to protect the port, but it doesn't allow the voltage to drop when triggered by the camera itself.


I'd expand that to "Canon is broken."

This company has needed a management house-cleaning for at least a decade. They're just wandering in the weeds at this point. Does no one there actually pursue filmmaking or photography? It's unreal.

I mean... there's no excuse for any digital camera, especially one made in the last decade, not to have an intervalometer. It's essentially FREE to implement.

But then, Canon's the company that was still pushing interlaced video well into the 2000s and launched the SLR video revolution... with a camera that could shoot 25 and 30 but not 24 FPS.


>But then, Canon's the company that was still pushing interlaced video well into the 2000s and launched the SLR video revolution... with a camera that could shoot 25 and 30 but not 24 FPS.

It wasn't just Canon; there were still a lot of tape-based cameras. Non-tape-based formats were still expensive (P2 cards, etc.). It wasn't until the 4K spec came out that interlacing was finally dropped. You have to remember that waaaay back in 1996, when HD first started broadcasting, the signal was still analog and flat panels were not common at all. HD CRTs were a thing, and required interlacing. Just like color TV had to remain backwards compatible with B&W, they chose to make HD backwards compatible with SD.

One myth/legend says a Canon engineer not in the DSLR department realized the capability of the camera's chip and mentioned that he could make it record MP4 video, sort of as a fluke. It wasn't something they set out to do. It worked, so they rolled it out. I don't know the validity of that story, but it sounds like Canon didn't expect the feature to be that popular. As with all v1 products, some things just weren't there, like 24fps. I can't remember if any video cameras at that time had full-frame sensors; I'm thinking 2/3" or 1" sensors were the norm. The shallow depth of field that the full-frame sensor brought to the ~$2500 market is what did it, as well as a readily available large selection of lenses. No more ENG look for video without shooting film.


Exactly none of Sony’s latest full frame mirrorless cameras have a built in intervalometer, and there is no way to install one. So it’s not just Canon.


You'll be happy to know that it's supposed to come in the April firmware update, though.


Cascable Pro works okay for me IMO, even though it is indeed a third party solution.


They have an intervalometer. It's embedded in the $100+ remote they sell. I've borrowed and used one; it's terrible, though it does work. [1]

They do have wifi on some models, but the app and linking aren't trivial. I haven't bought a camera in years, but they were also selling GPS add-ons when everyone else was integrating it into their cameras.

I have a 5DmkII and the quiet mode and image quality are quite astounding when shooting indoors. (I'm a community orchestra's photographer)

[1] https://www.google.com/search?client=safari&rls=en&q=Canon+T...


For synchronizing motion, you can use the camera's hot shoe. It will be closed during the exposure and open once it's complete.


> you can use the camera's hot shoe

Or even better the PC Sync port - it's quite a simple plug, easy to get one and connect to the camera, the hot shoe is a bit fiddly to connect to (unless you have a spare shoe and are willing to modify it). Also setting flash sync to the second curtain will close the circuit just before the exposure ends instead of when it starts, which may be beneficial for interval calculations with long exposures.


I've tried the flash port with rear-curtain syncing, but it still didn't do what I needed. Mainly, I had difficulty with the timing of the signal. It's not like the shutter release, where there's voltage and then there's not; that's an obvious thing to test for. The flash was just a quick blip. Maybe it was the low-cost meter that I have, but when the flash triggered, it barely even registered on my meter.


Yes, it should be a short blip. It's just closing the circuit for a brief moment to allow the discharge of power from a capacitor into the bulb. Hard to detect by a meter, but should be easy to pick up by an Arduino or something like that, maybe even a simple circuit.


A (cheapish) multimeter is going to be too slow to catch the signal from a flash shoe, however any cheap microcontroller or electric circuit should catch the signal with ease.


So, any suggestions on what the voltage from that port would be? That was my hang-up. Since the multimeter wouldn't register, I was just guessing. I started at 5V, similar to the shutter release, but could never get a reading from it to register on my Arduino.


There's no voltage. Maybe your meter could register a drop in resistance (from ~∞ to ~0). Think of it as a simple push button. Or, to put it differently, you need to supply your own voltage; it will flow for a moment when the camera triggers the flash (closes the circuit), and you can register the spike.

Edit: the circuit is insulated from the camera electronics since some flashes can send 250V or even more through the PC socket; it acts as the pathway for the charge from the capacitors to the bulb (in some, esp. older, strobes).
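To make the "supply your own voltage and register the spike" idea concrete: in practice you'd wire the sync contacts between a pulled-up GPIO input and ground, then watch the pin for the brief 1 -> 0 drop when the camera closes the contact. Here's a minimal sketch of that polling logic (my own illustration, written as pure Python over a generic `read_pin` callable so it isn't tied to any particular GPIO library):

```python
def detect_flash_pulse(read_pin, n_samples=10000):
    """Scan successive pin samples for a falling edge (1 -> 0).

    read_pin: callable returning the current logic level, where 1 means
    the sync contact is open (pin pulled up) and 0 means the camera has
    closed the contact to fire the flash.
    Returns the sample index of the first falling edge, or None.
    """
    prev = read_pin()
    for i in range(1, n_samples):
        level = read_pin()
        if prev == 1 and level == 0:
            return i  # the camera just closed the sync contact
        prev = level
    return None

# Simulated pin trace: the contact closes briefly at sample 42.
trace = [1] * 42 + [0, 0, 0] + [1] * 55
edge = detect_flash_pulse(iter(trace).__next__, n_samples=len(trace))
```

On real hardware you'd swap the simulated trace for the GPIO library's read function; an interrupt/edge callback is even better than polling, since the closure can be very short.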


Can confirm this works on 5d3. Picked it up with Raspberry Pi GPIO.


> The ETTR ability is also priceless for chasing sunsets/rises.

Do you mind explaining how you use this?


First, you have to enable the 'ettr' module. Once enabled, select the Expo->AutoETTR setting. For timelapses chasing sunsets/sunrises, I set it to Always ON so that it will determine what to do on each frame. For sunsets, I also set the 'Slowest shutter' setting to what works in that environment (cityscapes ~2 secs, dark-sky locations ~20-30 secs).

The concept is to push your exposure to the limits of the digital sensor. This gives the shadows extra exposure while counting on the fact that you can recover a certain amount of highlight that would normally look overexposed. In fact, if you shoot JPEG images, it will be overexposed. Only shoot RAW.

ML evaluates the exposure of the current frame, and then looks to see if it needs to adjust it up/down. As the scene gets brighter, it speeds up the shutter. As the scene gets darker, it slows down the shutter. If the 'Slowest shutter' speed is reached, ML will then try bumping up the ISO to keep the exposure the same. My mkII stops at 1600.
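The adjustment logic described above can be sketched roughly as follows. This is a simplified illustration of the idea, not ML's actual code: the function name, thresholds, and doubling/halving steps are all my own assumptions (the real Auto ETTR works from the RAW histogram in much finer increments):

```python
def ettr_adjust(highlight_level, shutter_s, iso,
                slowest_shutter_s=20.0, max_iso=1600,
                target=0.90, tolerance=0.05):
    """One ETTR step: push exposure toward the right edge of the histogram.

    highlight_level: brightest-pixel level from the current frame's RAW
    histogram, normalized to 0.0-1.0.  Returns (new_shutter_s, new_iso).
    All thresholds here are illustrative, not Magic Lantern's.
    """
    if highlight_level > target + tolerance:
        # Scene got brighter: shorten the shutter to protect highlights.
        shutter_s /= 2
    elif highlight_level < target - tolerance:
        if shutter_s < slowest_shutter_s:
            # Scene got darker: lengthen the shutter first...
            shutter_s = min(shutter_s * 2, slowest_shutter_s)
        elif iso < max_iso:
            # ...and only bump ISO once the slowest shutter is reached.
            iso = min(iso * 2, max_iso)
    return shutter_s, iso
```

Called once per frame during a sunset timelapse, this walks the shutter out to the 'Slowest shutter' limit first and only then raises ISO, stopping at the cap (1600 on a mkII), matching the behavior described above.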


ETTR is an acronym for 'expose to the right'. It's fine for sunsets, but are portraits trickier, because the sensor's gamma curve is optimized for skin tones falling within a certain IRE range? I come from the world of Arri and RED where this is a thing, but perhaps the high dynamic range of stills makes this a non-issue.


I've had this conversation with portrait guys that think I'm crazy for pushing the exposure that hard. Granted, I don't shoot a lot of timelapse with models ;-) I do know that pulling skin tones back out of shots that were 'accidentally' over exposed was tough/near impossible. As with all things, each camera has its unique qualities/abilities. Whenever I talk ETTR with people, I always suggest testing the limits of each camera body and the ability in post to recover the highlights. Test, test, test, and then know those limits so you can get the max ability from each piece of gear.

The crazy thing is that I come from a video engineering background, and image acquisition with just a histogram is still foreign to me. Give me a good waveform for exposure and vector scope for color, and I'm much more confident about the image. I want my camera cart to look like a DIT station!


Yeah, I mean, I think there would be such a minimal difference in recovered highlights with a raw image where you've used ETTR. I think testing the sensor limits is a great philosophy, but letting go and focusing on the 'artistic' subtleties over the technical subtleties of an image becomes more important.

Would be interesting to a/b your ETTR and 'correctly' exposed images to your friends without letting them know.


Sensor response is pretty much linear. That's why you want to shoot RAW, which gets you a file without a tone curve applied. JPEG, by contrast, applies not only a tone curve but a contrast curve as well, which aggressively tosses highlight and shadow detail.
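That linearity is exactly why ETTR pays off: in a linear encoding, each stop down from sensor saturation gets only half as many code values as the stop above it, so the brightest stop holds half of everything the file can record. A quick back-of-the-envelope check, assuming a 14-bit RAW file (my illustration, not specific to any camera):

```python
# A 14-bit linear RAW file has 2**14 = 16384 code values.
full_scale = 2 ** 14

# Count the code values inside each of the top 5 stops: stop 1 spans
# [full_scale/2, full_scale), stop 2 spans [full_scale/4, full_scale/2), ...
levels_per_stop = []
hi = full_scale
for _ in range(5):
    lo = hi // 2
    levels_per_stop.append(hi - lo)
    hi = lo

# The brightest stop alone gets 8192 levels; five stops down only 512
# remain, which is why underexposed shadows posterize and why pushing
# exposure to the right preserves the most tonal information.
```

ETTR exploits this by filling the level-rich top stop at capture time and then pulling exposure back down in post.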


reflex cameras are broken lol


I recently purchased a 5D Mk III and installed Magic Lantern on it. The 14-bit raw video is AMAZING. Also, this particular camera can now record at up to (I think the limit is) 3.5K, vs 1080p from the factory.

https://youtu.be/6yKbwXYmpD0


I don't think that's the best example. A lot of those post-shots were extremely over-processed, blown out highlights, distorted colours, etc.

This shows a pure comparison (it should link to the right time-mark). First shows some before/after full shots then does a split screen comparison in various situations:

https://youtu.be/WTr2L3uCYts?t=80

5D3 comparison:

https://youtu.be/6Pvy-J7kNTQ?t=30


That first video doesn't seem like a great example either. The RAW footage has clearly been color graded, so it's not a one to one comparison. It would be better to see some of the limitations of trying to regrade the camera's default h264 output, and a true comparison of the difference in sharpening. The RAW footage has clearly been sharpened to some extent because there's quite visible ringing in a lot of those shots.

It would also be nice to see it without Youtube's compression, if anyone has a better link.

(Your second video seems much more in line with what I'd expect, but it's still hard to see clear differences with Youtube's compression.)


There’s not much meaning to a “non-color graded” or “non-sharpened” picture.

Many choices must inevitably be made along the pipeline from estimated-electron-count-per-sensor-pixel -> image on a display. Default choices are not inherently more correct (or truer to the scene or whatever) than deliberate choices.

The fairest comparison is probably to find someone highly skilled at image editing and get them to try to make the best output images they can from both inputs. You’ll get to see places where the standard processing and compression actually lost data.

Often the processed-in-camera version still has enough data that a more-or-less comparable output image can be produced, but if you start peeping on pixels you might notice extra noise, banding, blur, ringing, ...

The problem with the comparison videos in this thread is that the person/software deciding what to do with the “raw” data has made a bunch of choices to allocate more contrast to shadow and highlight areas, etc., while the in-camera software made a different choice... and nobody tried to reconcile the two versions afterward.

Comparing the amount of usable data available is the comparison that actually matters.


In the GP's YT link, many people are talking about the color grading. One commenter said:

"I disagree when I shoot raw and h.264 my camera has the ability to shoot both at once and spit both files out, and these are exactly how it looks no color grading fresh out the camera. "


That video, with the pre- and post- differences is pretty remarkable.

Here's my stupid question of the day: given that this color grading of RAW streams makes _SUCH_ a massive difference... why don't companies like Canon and Sony just ship it with the product? Why do their cameras fall short of having something like Magic Lantern? In the end it gives the customer what they want... surely Sony/Canon are capable of producing something like this, so why not produce it?


Price differentiation. Canon sells video cameras that shoot raw; they're 10x the price of a DSLR (see the Canon EOS C700 FF, $33,000).

They can charge this much because they're competing with high-end cinema cameras used to shoot blockbuster films, such as the Arri Alexa LF ($100,000) or Red Monstro 8K VV ($79,500).

Some video camera manufacturers have started offering raw at lower price points. The Blackmagic Pocket Cinema Camera 4K shoots 4K raw at 60fps for only $1,300, though its Micro Four Thirds sensor is a lot smaller than the full-frame sensor in the 5D Mark III.


Quick plug for the Blackmagic eGPU. I had not heard of Blackmagic before, because I don't do film work. But Apple partnered with them on a pair of eGPUs and I really dig the base one. It is silent, creates a useful dongle-free set of ports, and obviously improves macOS graphic responsiveness when working with a 4K external monitor from the 2018 MacBook Air.


I wonder if it has anything to do with distinguishing a video camera from a stills camera for tax purposes:

https://uk.reuters.com/article/tech-eu-cameras-trade-dc/eu-t...


They do...on different models, and after they've had the opportunity to assess market demand. Naturally they would prefer people buy tools made for the job than cheap out with modifiers, but on the other hand they've taken a hands-off approach to Magic Lantern rather than complain about their intellectual property being infringed, so I'm OK with that.

It's not just the firmware that makes the difference between a DSLR and a pro video one, but ergonomics and hardware connectivity. Videographers tend to be more demanding on technical support issues too IME.


Feature gating/price differentiation. You know what's ridiculous? Their shutter timer doesn't go above 30 seconds on any of their cameras. Magic Lantern obviously has no problem giving you a UI to adjust the timer for whatever value you want, and it's worth it just for that. Canon want you to buy the timer/wired trigger peripheral.


Canon still reserves a lot of space for their C line cameras. They have been very hesitant to let their SLR / Mirrorless market creep into that realm. The 5d revolutionized video production, but they've always crippled their photography cameras just enough to matter to professionals.


They do, but on their higher end video cameras (Sony FS range, Canon C200). They could easily do it on normal dSLR's but have chosen not to.

However, you can get very good cinema cameras for not very much money from Blackmagic that shoot raw video. I have both their 1080p pocket and 4k pocket cinema camera, they are awesome.

Magic Lantern is an amazing project too, but reverse engineering the firmware is a lot of effort and never quite bug-free.


I'd be curious to hear from the parent of your post what the user experience is like, especially after using it for a long time. Raw video is a huge amount of data. The camera's storage and buffering system might work for short clips, but it might also be a real pain in the ass to use.


I wonder if there are thermal issues that may affect the life of the sensor?

But it’s mostly a business decision I guess.


Maybe a mix of multiple factors including "business"?

- e.g. it's more cost effective to use the same HW on multiple devices while limiting their capabilities through SW by segment => total cost covered by many sold mid-range devices AND a few high-end devices?

- and/or maybe what "magic lantern" allows you to do wasn't certified to work on 100% of the HW.

- and/or "magic lantern" did its own SW development, which would mean "additional costs" if done by a company (and here is where open source hits one of its targets?). Meaning that maybe at a certain point Canon said "ok, we've spent all our budget for this price class of camera vs. the expected earnings, so that's it".

Cheers


Market differentiation/segmentation. This happens in lots of software-supported technology products, the cheaper version is often the same software as the more expensive version - but with a few flags switched to off.


Most probably because it would compete with the Canon EOS product line.


The full name of the 5d mark II is Canon EOS 5d mark II.


I meant the cinema EOS line...


Almost all the color graded versions seem to have a sepia tone. Is that on purpose?


As a Nikon user, I have no direct experience with Magic Lantern (always a bit bummed about that) but I was also wondering the same.

I don't know if these are presets or something done in post-processing, but it almost looks like the video equivalent of filter plugins in some cases.

I'm by no means an expert but color grading video is often a detailed process with a lot of room for finding your own creative "vision". Think of the split toned, green/orange aesthetic that was popular in movies for a while. Doing the color grading can have a major effect (good, bad, or just different) on the mood of a video.

In these examples, I was definitely impressed with some aspects of how curves were adjusted to bring up shadows and make details more visible. But at the extreme end, it reminded me almost of the overkill seen in early HDR photography.

Some tweaking can make things look more realistic and bring out detail but it is very easy to go too far and end up with something that looks like an Instagram filter.

That's mainly why I was wondering if the video examples were in-cam presets when running Magic Lantern, or if it was something the videographer did in post with the extra data captured with the higher bit depth enabled.


You can’t use in-camera presets if you want RAW. That was shot RAW, with the expanded bit depth that Magic Lantern affords. Then, it was loaded onto a computer and graded. I personally only use the presets to get more depth out of the space-efficient photo/video encodings by lifting blacks before they get cut off or turn out so dark you can’t get accurate colour out of them, or to do that in the digital viewfinder only before saving as RAW. The more finished-looking presets are fun for previewing but not all that useful otherwise.


Presumably the added greenish-yellowish cast and over-the-top HDR contrast mangling were intentional.

It should be theoretically possible to do anything to each video frame that you can do to a still photo, including moderate/tasteful color & contrast adjustments.


This is poetic, my eye is good but I can only hope to half your rhetoric.


He's just doing a popular look after he downloads the video from the camera. It's sepia just because the popular look at the time was sepia and very bright.


The term color grading should probably put in quotes here. A few looked pretty great to me but a lot did not.


Why quotes? That's exactly what color grading is, global adjustments to the video.


As I and wikipedia understand it, "Color grading is the process of improving the appearance of an image..."


I mean, even then, "improving" is entirely subjective. Color grading is the process by which you edit a video to perform global color/tone adjustments, but I don't understand it as being strictly about improving, you might want the video to look old, in which case you'd grade it in a way that it deteriorates.
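For a concrete sense of what those global adjustments are, here's a minimal lift/gamma/gain sketch in Python (the "aged sepia" numbers below are made up): the very same operation can "improve" an image or deliberately deteriorate it, depending entirely on the parameters you feed it.

```python
def grade(rgb, lift=(0, 0, 0), gamma=(1, 1, 1), gain=(1, 1, 1)):
    """Classic lift/gamma/gain: a global per-channel tone adjustment.
    Inputs and outputs are normalized to the 0.0-1.0 range."""
    out = []
    for v, l, g, gn in zip(rgb, lift, gamma, gain):
        v = v * (1 - l) + l   # lift raises the blacks
        v = v ** (1 / g)      # gamma bends the midtones
        v = v * gn            # gain scales the highlights
        out.append(min(max(v, 0.0), 1.0))
    return tuple(out)

# A hypothetical "aged sepia" grade: lifted blacks, warm-biased gain.
pixel = (0.5, 0.5, 0.5)  # neutral gray
r, g, b = grade(pixel, lift=(0.05, 0.03, 0.0),
                gamma=(1.1, 1.0, 0.95), gain=(1.0, 0.9, 0.75))
assert r > g > b         # neutral gray now skews warm/sepia
```

Real grading tools layer many of these (plus saturation, curves, and secondaries), but conceptually it's this: global math applied to every pixel, with no notion of "correct" baked in.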


Really it is not that subjective, there is an art to it, there is some nuance to a mood or style you are going for but there is a definite good and bad.


“Good” or “bad” printing (to use the darkroom photo term) can only really be defined relative to artistic intentions. There are many possible choices to make in producing a final image, many competing aesthetic goals which cannot all be satisfied simultaneously, and no “right” answer.

Some photo printers love to allocate almost all of their available contrast to large-scale shapes, producing essentially silhouettes. Others like to allocate almost all contrast to local fine detail, leaving the image looking like a gray blur from afar but detailed and crisply textured from close up. Some photographers like their images to be a festival of competing intense colors, while others make nearly monochromatic images in one color or another, or stick to a pastel palette, or make mostly neutral images with a few intense exceptions. Etc.

When someone says a photo or video was printed badly, what they usually mean is that either (a) the printer had shallow aesthetic judgment or boring artistic goals, and/or (b) the printer lacked the skill to effect their artistic vision.


I suspect that was just the aesthetic preference of the color grader.


> On Canon when shooting "RAW" you get 14 bit depth (.CR2 files)

is this false for the factory default?


.cr2 files are photographs, and a lot of cameras can shoot raw, even ones much cheaper than a 5D. Magic Lantern lets you shoot raw video.


Oh my god they put a tiny electronic Wes Anderson in your camera!


Imagine if Apple and Google made full frame cameras.

Currently we have great hardware with terrible software and vice-versa.

A Sony full frame sensor, Zeiss glass, with an A series chip and good software to make it shine.


> Imagine if Apple and Google made full frame cameras.

I did – briefly – and am still recoiling from the horror.

> Currently we have great hardware with terrible software and vice-versa.

The software on even older DSLRs isn't terrible. It does the job it was intended to do quite well. It is learn-able, and it doesn't change its behaviour with an OTA-update you can't easily prevent.

More thought has gone into the UI design of even the earliest, rushed-to-market DSLR firmwares than some of the UI changes that Apple and Google have inflicted on their victims.


> I did – briefly – and am still recoiling from the horror.

Please explain.

> The software on even older DSLRs isn't terrible.

Oh yes it is. Like salt on a fresh wound.

> It does the job it was intended to do quite well. It is learn-able, and it doesn't change its behaviour with an OTA-update you can't easily prevent.

Its menus are completely inscrutable. But I'm not even talking about usability. One can learn anything eventually, given enough willpower.

I'm imagining a processor like the one on the iPhone/iPad Pro acting on all those pixels. And a true software platform running on top of it.


That's exactly the problem. If you put Apple or Google in charge, you'll get a true software platform acting on all those people. That's going to make everything slow and terrible.

The beauty of a DSLR is that you turn it on, and it's ready to go. Take pictures in raw and process them when you get home. If you want to adjust the exposure and what not, you can; if you do that a lot, you can get the higher model cameras with more dials, and customizable buttons so you don't have to go through menus as much.


I don't know about Android, but the Camera app on iOS is extremely responsive.

And I envy the amount of processing that goes on when you press the button. From turning a noisy mess from a fingernail size sensor into pretty good JPGs to auto choosing the best photo from crazy fast burst mode, even before you press the button!

And audio/video, oh boy. Hyper slow motion 4k, noise canceling…

Imagine what Apple/Google could do with something the size and weight of a DSLR and 3k US$ budget.


> the Camera app on iOS is extremely responsive

I love my iPhone and I'm happy to go to bat for Apple when I agree with them, but the Camera app on iOS is painfully slow compared to a Canon DSLR. The Canon can wake from sleep and take a photo in the tiniest fraction of a second.


I meant shutter release lag. Still, with both devices locked, the time from pick up to first shot won’t be that much different.

I’d rather have my Nikon, but I wouldn’t be in trouble if I needed a quick shot and only had my iPhone.


What do you mean locked? I can go from powered off to shooting my Canon in a fraction of the time than my iPhone from locked. And importantly a good photo because I can zoom and frame while the camera is getting the software ready.

Where the iPhone wins is when it’s already in my hands.


I don't have an X, but my SE unlocks (fingerprint) in less than a second.

Also, you don't even need to. You can swipe from the bottom and pick the camera app from control center.

It's probably 1.5/2s from screen off to photo taken


I have an iPhone 7. I’ve never quite gotten the gesture right in the moment. It feels like it has some degree of spurious input rejection on the unlock screen.


I hate my iPhone X camera because it's slow. I have to double and triple check that it's focusing on the right thing and hold the phone really steady. It is adequate for me because I always carry my phone with me, but I would never want it as a general replacement for a real DSLR or mirrorless camera.

Apple/Google would add too many features, focus on wifi connectivity, and generally ruin the one amazing thing about cameras–the ability to turn them on and immediately take photos, no fuss. No waiting for software updates, or alerts, or any of the million other things that do not belong in a camera.


Double tap the power button on Android and it'll be ready to snap pics in 3 seconds or less. No popups or other dialogs.

Not bad.


That’s slow in DSLR land.

The 1DX Mark II (admittedly a very expensive pro camera) goes from physically switched off to first shot in 0.8 seconds, and from standby to image capture with full autofocus in 0.085 seconds: https://www.imaging-resource.com/PRODS/canon-1dx-ii/canon-1d...


Not good enough.

My D750 can go from OFF -> ON -> shutter press -> focus -> picture taken in roughly 0.5s in normal light conditions. There are so many moments that I managed to capture only because of this speed; they would be gone in those 3s.

Not even going into "details" like having a 5x mechanical zoom on it (24-120mm): I can snap a distant scene in 1s in the above scenario. And I do snap those scenes quite often.


Switching to the camera app isn't the hard part. In iOS it's also very easy. I'm referring to the [taking the picture] part.


> but the Camera app on iOS is extremely responsive

Shutter response time on my Canon bodies is adjustable between slow and fast. Respectively 55 and 36 milliseconds.

I doubt any phone can even wake from sleep in that time.


> I'm imagining a processor like the one on the iPhone/iPad Pro acting on all those pixels. And a true software platform running on top of it.

I think a lot of professionals and prosumers who are still buying interchangeable lens cameras don’t care. Folks generally shoot in RAW and edit their photos on a PC with Photoshop and Lightroom, which affords people much more control and is much more powerful than whatever is in an iPad.

The only photographers shooting JPEGs are sports and news photographers, and they have specialized use cases. That’s why the pro cameras (1DX2 and D5) come with Gigabit Ethernet and auto-upload to an FTP server through that.

People want the pre-capture sequence to be reliable and easy to use (so lots of buttons and dials) and in general once the photo is captured and saved, they don’t care and don’t want the camera involved any more.

Of course, if people really want to upload to Instagram immediately, most cameras nowadays come with Wifi so they can just transfer photos to a smartphone and do whatever they need to do there.


Not all processing can be done in post. A lot goes on before RAWs are recorded that could benefit from faster hardware and better software.

An iPad Pro has one heck of a CPU/GPU, rivaling high end laptops, where most edits are made.

Having something like that (and custom silicon, perhaps) dedicated to pro photo/video/audio is something we haven’t seen.


Do you know what is being done to the data between coming off the sensor and being saved to a RAW file? My impression is that other than dead/hot pixel mapping and some degree of noise reduction, not much else is done (at least in Canon-land).

Even if the camera can do more post-processing, I am not sure I want it to. I would rather the camera do less after exposure is complete, so I can get more control over it while editing.


At the very least there’s signal amplification (and possible highlight clipping) at higher than baseline ISO. Otherwise you'd get a very dark image indeed.
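A toy model of that amplification (hypothetical 14-bit numbers, treating ISO as a simple linear gain): raising ISO multiplies the signal, but the ADC ceiling stays put, so a highlight that was safe at base ISO clips at a higher one.

```python
# Toy model only: ISO as a linear gain applied before a fixed-ceiling
# ADC. Real cameras mix analog and digital gain stages, but the
# clipping behaviour is the same in spirit.

FULL_SCALE = 16383  # 14-bit ADC ceiling

def amplify(signal, iso, base_iso=100):
    """Scale the signal by the ISO ratio, clipping at full scale."""
    gain = iso / base_iso  # ISO 200 = 2x, ISO 400 = 4x, ...
    return min(round(signal * gain), FULL_SCALE)

# A highlight that was safe at base ISO...
assert amplify(9000, iso=100) == 9000
# ...clips once the ISO 400 gain pushes it past full scale:
assert amplify(9000, iso=400) == FULL_SCALE
```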


Ah of course. I assume that is done pre-ADC though, so it is part of the analog circuitry on-board the sensor?


> Please explain.

Not parent, but FF-lust is objectively bizarre.

Read this (now very old) rant on the subject [1]

Then consider advances in optics, CAD, and sensors, and then marvel at why some people are desperate for an arbitrary sensor size that was plucked out of the air more than a hundred years ago.

[1] http://www.digitalsecrets.net/secrets/FullFrameWars.html


I’m very much aware of it, believe me.

Nothing special about 35mm; it’s just that we have a ton of amazing glass made and bought to go with it.

And even though there are diminishing returns, quality wise, the more photons the better.

I’m sure they could make APS-C, Micro 4/3, 4x5”, to please everyone. I wouldn’t complain :)



You mean the first successful consumer digital camera?


Apple is allergic to buttons so I can't see them producing a camera with a real user interface (knobs and rings and so on)

Leica T series is probably pretty close to what they would come up with tho. Solid block of metal, and nothing but screen on the back.


You can't work like that. I have a 5D and a Sony RX-100, and, given that I've taken good enough photos with my phone, the differences in the sensor are relatively minor. The UX differences, however, are light years apart. It takes me tens of seconds to change the settings like I want them on the Sony, whereas on the Canon they're a knob away.

I'm pretty sure that the major difference between a pro and a consumer camera is the amount of easily accessible buttons and knobs.


>I'm pretty sure that the major difference between a pro and a consumer camera is the amount of easily accessible buttons and knobs.

Precisely this. After 10K shots on a Canon 1200D, I upgraded to an 80D specifically to have access to the top controls to quickly change settings while shooting. You can remap a lot of the 1200D's controls in the "Expert Feature"/customizations section, but the 80D's extra ergonomics are a true joy. The camera is molded to my hands and comfortable to wield for hours at a time.

I can't say the same for the smartphones I shoot with. My Blackberry KeyOne's camera has fully manual controls (even the focus distance can be manually set!), but the "slide your finger to control settings" UI sometimes feels like the "Reddit designs volume control" experiment from a few years ago. I empathize with the Java programmers, but it's extremely frustrating to tell the smartphone what kind of picture I want. It should go without saying that it's equally frustrating to be unable to hold the slab phone adequately still.

Reading people's passionate hatred for real cameras makes it quite clear that they aren't the target audience for a simple mid-range DSLR, let alone monsters like the 1D or Nikon D5.


Modern Apple is, probably because getting rid of a moving part at iPhone scale is great for manufacturing and for avoiding breakage.

But they can make great physical buttons. That last iPhone home button was one of the best in the industry.

I bet Jony Ive and Phil Schiller love Leicas


If I remember correctly, Steve Jobs compared the iPhone 4's design to a Leica.



While I disagree with your original point that Apple/Google should get into the camera manufacture space, Zeiss is working on a new "next generation" camera system. It's targeted at the RX1/Leica Q market, but with significantly better software and obviously great optics.

https://zx1.zeiss.com/


I've basically given up on the whole DSLR thing after investing too much in it, precisely because of this sorry state of their software. My phone camera, which is 10X lighter and smaller, has far more of the features I need than these stupid DSLR firmwares. Yes, the quality isn't that great if you're one of those pixel-level scrutinizers, but you can't beat getting 240 FPS video or 270-degree panoramas so effortlessly. The only reason crappy DSLR firmwares survive is that they offer lots of low-level knobs that people with a huge amount of patience and time can put to good use.

I have never understood why giants like Canon and Nikon can't match the feature set, ease of use, etc. of smartphones. In many ways this is playing out like it did with Kodak, where they sat on their traditional business for so long that there was ultimately no way back.


You can't seriously compare a tiny phone camera with a full frame DSLR. You simply cannot take certain types of photos with your phone due to optics and the size of the sensor.


I think slowly they will get there with phones. Camera arrays with 3-5 sensors will be common on phones in 1-2 years.

That way you get to keep lens sizes very small, but your effective sensor size is quite large, and you can do a lot of other cool computational photography tricks with multiple sensors.


I think so too, but we could get there sooner and better with big, high quality light gathering goodies.


yeah, I would probably buy one. But perhaps most people don't feel like they need a dslr since phone cameras are getting so good, and dslr's are quite large and heavy. With a full frame sensor your glass is going to be large/heavy


I have the same thoughts. Why do I have to relinquish all the niceties I'm used to from my phone in my 3000 € camera? What could be done if we had the computational power from phones combined with the high quality data from a big sensor?


Honestly, I think that would be an awful downgrade. The software on, say, a Canon camera is fast, reliable and easy to use. Magic Lantern is "just" about adding even more advanced and/or obscure features.


>The software on, say, a Canon camera is fast, reliable and easy to use.

I think this is Stockholm syndrome speaking. Most photographers have no problem installing apps and upgrading their phones. Yet ask them to set a timer or change the focus/metering point and it's a nightmare. Let's not even talk about Canon's camera control/export app, whatever it's called. Software is hard.


What photographer can't change the metering without even looking at the camera? It's literally the thumb knob.


I’m talking about Matrix Metering, Center-weighted, or Spot Metering.

I deal with photographers in their 50s/60s weekly. They all love their phones and hate the menu screens on their Canon/Nikon/Sony. You set it up the way you like it and hope never to need to touch it again.

Time how long it takes you to find and change the Exif copyright on your photos, daylight saving time, or how to save JPGs to one card and RAWs to another. It’s a nightmare.


http://www.kenrockwell.com/canon/images/80d/810_9029-top.jpg

https://www.ephotozine.com/articles/canon-eos-1d-x-mark-ii-v...

This might blow your mind, but professionally-oriented cameras all have dedicated buttons to toggle metering modes without looking away from the viewfinder.


The 5D has twice the functions on the button, even, since pressing the button and scrolling one wheel does one thing, and the other another. It's very handy.

In comparison, I was trying to set bracketed exposure on my RX100 yesterday, and I still haven't managed to find the feature.


You should drop the condescending tone toward people who have more years of experience photographing professionally than you have of age.

I can operate my Nikons blindfolded and single-handed. Canons are not my cup of tea, but I can get by if needed. Sonys, on the other hand, have amazing sensors and great glass, yet bury such important features under menus requiring pressing the Fn button while staring at the LCD, or something like it.


> You should drop the condescending tone toward people who have more years of experience photographing professionally than you have of age.

You might want to avoid making arbitrary assumptions about anonymous debaters, who may just as well be far more experienced than you, especially when said experience is not particularly relevant to the discussion of easy operation.

It's not interesting to know if someone can efficiently navigate an instrument after 25 years of experience with it. At that point, quirky UX ends up entirely concealed by muscle memory, and such person ends up being a poor source of input for UX matters.


>You might want to avoid making arbitrary assumptions about anonymous debaters…

https://keybase.io/eirik

>…who may just as well be far more experienced than you

I wasn’t referring to myself. But to photographers in their 50/60s.

> It's not interesting to know if someone can efficiently navigate an instrument after 25 years of experience with it. At that point, quirky UX ends up entirely concealed by muscle memory, and such person ends up being a poor source of input for UX matters.

Agreed. I made the same point a few comments above.


> https://keybase.io/eirik

I can't tell how many years of experience someone has from a profile picture. Can you?

> I wasn’t referring to myself. But to photographers in their 50/60s.

You worded it such that it referred to yourself. You can't argue using other people's experience.


> I can operate my Nikons blind folded and single handed.

What's the issue, then? The Nikon shooting interface obviously works well for you, and I think that is far more important and used than changing daylight savings settings.

> Time how long it takes you to find and change the Exif copywrite on your photos, daylight saving, or how to save JPGs in one card and RAWs in another. It’s a nightmare.

I don't know enough about Nikon or Sony, but I use Canon and I know where all those settings are. Even if you want to search the web to find where they are, I don't think it is a big deal: most of those settings you listed I change once or twice a year.

In any case, I think it is better than the settings app on my iPhone, where the General sub-menu contains everything from controlling Background App Refresh (Why not in individual app sub-menus?) to enabling iTunes Wi-Fi Sync (there is an iTunes and App Store sub-menu too).


As I said elsewhere in this thread, better Menu UI would be a side benefit. I’m craving for fast processor and powerful software on the back of a big sensor and glass.


What do you want the fast processor and powerful software to do, though?

For instance, I can't imagine trying to cram the Lightroom UI into a 4" screen on the back of my camera is a good experience, especially when I have a 30" 4K screen at home to edit on. To me, the current Lightroom for iOS is nowhere near as usable as the desktop version, and still quite a bit slower.


>What do you want the fast processor and powerful software to do?

What they already do in smartphones, but better. Almost guarantee you’ll never miss a shot (no blinks, blur, bad exposure), better signal processing, etc

> I can't imagine trying to cram the Lightroom UI into a 4" screen

Me neither. I’m not proposing that in the slightest


> What they already do in smartphones, but better. Almost guarantee you’ll never miss a shot (no blinks, blur, bad exposure), better signal processing, etc

OK, but that seems to be turning the camera into a point-and-shoot and I am not sure most of what is left of the ILC market wants that (or maybe it is just me). For me, I would ultimately want to control things like exposure, shutter opening, etc... myself (who is to say a "bad" over/under-exposed photo or some blur won't make for a better photo?).

I think people who want a point-and-shoot like experience are by-and-large are happy with smartphone cameras today (especially with advanced processing like Android's Night Sight), and they are not going to carry around a bulky ILC just to take photos.

In any case, I think Zeiss is trying to do something similar to what you want with the ZX1, and I will be interested to see how large that market is.


>OK, but that seems to be turning the camera into a point-and-shoot

Not at all.

>For me, I would ultimately want to control things like exposure, shutter opening, etc... myself (who is to say a "bad" over/under-exposed photo or some blur won't make for a better photo?).

100% agreed. But if instead of “shit, I missed this shot” I could have 3 taken before I even pressed the shutter, and one of them is perfect, I'd take it in a blink. Besides all the pie in the sky stuff that I imagine could be done as well.

>I think people who want a point-and-shoot like experience are by-and-large are happy with smartphone cameras today (especially with advanced processing like Android's Night Sight), and they are not going to carry around a bulky ILC just to take photos.

You're probably right. I don't know if it's a viable market, but one can dream.


> 100% agreed. But if instead of “shit, I missed this shot” I could have 3 taken before I even pressed the shutter, and one of them is perfect, I'd take it in a blink. Besides all the pie in the sky stuff that I imagine could be done as well.

Sounds like what you might want is a 8K video camera with a ring buffer that gets flushed to storage whenever you press the shutter button :p
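The ring-buffer idea is simple enough to sketch (this is a toy, not how any actual camera firmware works): frames from the live feed continuously overwrite the oldest entries, and the shutter press just flushes whatever is currently buffered — the frames from "before you pressed it".

```python
from collections import deque

class PreCaptureBuffer:
    """Toy sketch of pre-capture: keep the last N frames in memory;
    pressing the shutter flushes them to storage."""

    def __init__(self, depth=3):
        self.frames = deque(maxlen=depth)  # oldest frames fall off

    def on_frame(self, frame):
        """Called continuously by the live feed."""
        self.frames.append(frame)

    def on_shutter(self):
        """Return the frames captured just before the button press."""
        return list(self.frames)

buf = PreCaptureBuffer(depth=3)
for i in range(10):          # simulate a continuous live feed
    buf.on_frame(f"frame-{i}")
assert buf.on_shutter() == ["frame-7", "frame-8", "frame-9"]
```

Some cameras have since shipped exactly this under names like "pre-burst" or "pro capture"; the hard part at 8K isn't the logic, it's the memory bandwidth.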


On the 5D, press the metering button and turn the knob.

About the rest, I've never timed it because I've never needed to do it. Lightroom does most for me, and even the card thing would be roughly a one-time thing. I don't know what a comparison between a 6" touch display on a phone and a 2" non-touch display on a camera is supposed to show, but the camera menu isn't that hard: press menu, scroll to the appropriate section, select menu item and change what you need. It's pretty much the same on phones as well.


I'm trying to figure out the subset of people who are saving RAWs but relying on the camera's EXIF data, when Lightroom et al are perfectly capable of setting all the metadata right there.

Get into the enthusiast / prosumer models and there's more buttons for a reason. I want to change metering on my 5D4? I press the metering hard button on the top of the camera.


On the other hand, all you can control on an iPhone with the stock app is spot metering following the focus point.

The reason people dislike menus is that there's a lot of things to control. It's nice without it, right until you need to tweak something.


I'm not really a photographer, but a software developer with gadget cravings causing me to have a few cameras of different brands.

Canon's smartphone app is quite terrible (although this is a slam at smartphone UX, not camera UX), but I can't really see smartphone apps for cameras as anything other than a useless gimmick.

However, that app has no relation to how it is to use a modern camera. Comparing the UI's of iOS, Sony's Android camera app, Sony's A6000 and Canon mirrorless/dslr, Canon's modern mirrorless UX is by far the best. Easiest to use, prettiest, etc.

For everything an iPhone can do, Canon works exactly the same, but then Canon also does so much more. Focus point is by touch screen, and it's much more responsive than my phone. It also seems much smarter with regards to tracking. Important settings are in the HUD, rest in an quick menu one touch away.

All of this is of course subjective and anecdotal, but I would by quite saddened to end up with smartphone-d cameras. I do not see it making any positive improvement.


This tread got a bit out of hand, so I'll try to summarise my thoughts here:

I find computational photography incredibly exiting, but it's mostly taking place in smartphone space, with tiny sensors and lenses. Imagine what we could do with great glass and a huge honking sensor.


The hardware is not that great when it comes to connectivity. canon is behind on remote control via wifi and only got usb3 support in the latest models.


I agree that they can learn much. When was the last time a camera software update included new features....


Really? A general purpose OS, to replace the firmware and physical controls honed over decades and used by professional photographers in all kinds of scenarios?

You think that highly of smartphone makers?

Would you want them to re-make the aircraft cockpit too?


>A general purpose OS, to replace the firmware…

Yes, please. And a kickass processor with crazy fast memory to go along with it.

>and physical controls honed over decades and used by professional photographers in all kinds of scenarios?

Not at all. We love knobs, rings and wheels with great distinguishable tactile feeling.

A more intuitive settings menu would be a side benefit.

Truly revolutionary would be to able to write first class software to a mature capable OS.

What could the state of the art in demosaic, noise, latitude, color accuracy be with that kind of speed and tools?


So in other words, you don't mind if your camera's battery drains mysteriously.. Let's get a battery discharge graph in there so you can debug the process that's killing the battery!


Does this happen to your phone?

Mine is a supercomputer that fits my pocket with battery that lasts all day.

I trust them to make something that can be at least 10X thicker and heavier to performer comparably.


I suspect you didn't mentally contrast the stringency of power management on a professional level camera vs Android, iOS devices. "Trust them". Lol.. go google battery drain, and see what these modern OS permits..

Last all day means nothing in the context of what you brought up.


Hahah, that's funy. The cell phone is potentially the perfect battery drainer. Constantly screaming electromagnetic radiation in lot of frequencies, including through that huge screen, console level multicore CPU/GPU… and yet, they manage.

I don't follow Android's ecosystem much, but battery usage on iOS is pretty restricted and very much under control. It can be done.


> What could the state of the art in demosaic, noise, latitude, color accuracy be with that kind of speed and tools?

Do you shoot JPEGs exclusively? I shoot RAW, so the only thing I care about post image capture is being able to preview my photos, show a histogram, zoom in to check for focus accuracy and delete images. Everything else I can do on my desktop, which will always be much more capable than whatever I can put on-device.


Almost exclusively RAW, but sometime shot both at the same time.

But even RAW is not as “unbaked” as you might imagine. Specially in higher ISO


Sure, but that is mainly noise reduction and some dead/hot pixel mapping isn't it, and some manufacturers (Sony) do more of that than others (Canon/Nikon). Demosaicing, histogram adjustments, and the final render to JPEG is still done on my PC when I shoot RAW.

In other words, what improvements do you think camera manufacturers can add to the post-capture RAW workflow if we had a faster processor/better software?

(Of course, in pre-capture, there is always room for more improvements to metering and autofocus, but camera manufacturers know this and are constantly making adjustments and improvements each generation.)


Better signal processing, amplification, read noise and probably lot more that I can’t even imagine.

And of course, all the stuff during capture you mentioned could be a lot better too.

Picture being a camera engineer and suddenly have 10X more speed and a decent platform to work with. 32bit multiple exposure RAW in a single click, who knows.


> Better signal processing, amplification, read noise and probably lot more that I can’t even imagine.

Yeah, I get that, but I don't think there is an improvement here by sticking a modern smartphone CPU and OS into a camera.

Making improvements to those metrics ultimately require newer sensors with lower read noise and better on-chip ADCs. The camera manufacturers can't do 24-bit RAWs @ 20fps when the sensor only outputs 12-bits @ 10 fps.

If we are doing multiple exposures, then as you know, cameras already have exposure bracketing, and I can do the rest in my PC.


>Making improvements to those metrics ultimately require newer sensors with lower read noise and better on-chip ADCs.

Which big pocket tech companies could do :)

>If we are doing multiple exposures, then as you know, cameras already have exposure bracketing, and I can do the rest in my PC.

Sure, but we have gyroscopes, accelerometers, electronic shutters and 2Ghz processors on this dream device of mine. Surely we can do better than using a tripod, exposing entire scenes multiple times and try to align and blend them in the computer hours later.


> Which big pocket tech companies could do :)

Yeah, unfortunately I don't think big tech is that interested. Unless it is a strategic move, even at $1b/year in profit, it is not worth Tim Cook's time to do this ($1b/year is less than 2% of AAPL profit). I am not sure there is $1b/year in profit to be made across all the ILC manufacturers at this point.

> but we have gyroscopes, accelerometers

These are helpful and I wish cameras would record these and stuff them into the EXIF data somehow, so that I can do better post. I think we just have some disagreements about where post-processing should happen :p


Seems like you could embed camera firmware either in the sensor or in iOS/Android. Maybe not.


the dude's talking about terrible software from camera makers, and want iOS or Android software to power his professional camera.


Also, for non-DSLRs: http://chdk.wikia.com/wiki/CHDK


CHDK, with its built-in LUA scripting, is incredible. I've used it to outfit cheap (<$30 USD) point and shoot Canon cameras for time-lapse.


Why the hell are they capturing keyboard inputs of the q, v, m, l, and p keys and updating the URI w/ navigational things? Makes CMD+L useless...

Looks like the are supposedly emulating some of the camera's functions? I don't know enough about this camera for it to make any sense to me.


Looks like they were trying to create a mock of the user interface that's fully functional in the browser. It's really cool, great to demo, and they probably wanted to put it in the URI so you can share a link to a specific page. Maybe they even use the same UI code for both browser and the embedded system?

But in practice they implemented this poorly. They could update the URI with replaceState so you don't break the back button, and ignore keyboard shortcuts if a modifier key is pressed. One of the reasons I like projects with open-source websites - you can just file an issue on the website itself!


5DMkII owner here. I love ML and have been a canon user for over 20 years. But I recently switched to Sony as have many other Canon users I meet because Canon is not innovating as well as Sony. Better sensors, features, ergonomics and capabilities. The A7Sii is a thing of beauty. Includes many ML features built in like focus peaking, zebra lines, etc. And you van remove the 30 min limit on recording via a hack so no need for MLs features that try to help with that.

Also Sony make sensors for many other camera companies including Nikon.


My (relatively) inexpensive Panasonic GX-80 (GX-85 in some markets) has those features. How are Canon so far behind?

The biggest issue with the GX-85 for film is it's lack of audio in, that can be circumvented with a seperate audio recorder but it's certainly not perfect.


I'm surprised Magic Lantern hasn't been sued out of existence by Canon. Canon is notorious for holding back simple software features from entry-level cameras to price differentiate. Customisable firmware is a threat to their business model...

For the same reason I'm curious that Sony is not more open - their strategy appears to be the opposite of Canon, throwing in the kitchen sink of features into their entry level bodies. And yet they have removed tha ability to load Android apps onto their latest cameras...


Conversely, Magic Lantern is a reason for consumers to pick Canon over Nikon or some other camera brand. I imagine there are also at least some differences in the quality of sensors between high-end and lower-end DSLRs?


Why .fm? I instantly felt confident this was internet radio related.


Firmware perhaps?


QMK firmware do it too: https://qmk.fm


Yeah, I was happily expecting an interesting radio station, and instead got a disappointing SPA about camera software :(


Started by Trammell Hudson of Thunderstrike fame (and many other interesting projects), see https://trmm.net


If you're new to Magic Lantern and need a getting started guide, here is an in depth tutorial I created :)

https://www.youtube.com/watch?v=LHzQkJNMIzU


To repay you here's a little random tip, because the sound on that video is kinda boxy (with my headphones anyway), and this helped me greatly (to be honest, it kinda blew me mind how much this can fix)

With a sample editor that can use VST/AU effects, and the free version of this, or something that has the same functionality: https://www.tokyodawn.net/tdr-nova/

1. take a frequency band, put the Q for it to max or near max, and gain to max.

2. sweep the frequency until you find the one that makes it sound extremely boxy (it's probably below 1k but don't take my word for it, you will hear it when you found it however)

3. now set gain to zero, activate the threshold button, and dial the threshold to a value that just makes the boxyness disappear without making it sound too thin

4. repeat with more frequencies as needed/wanted

5. compress or normalize


What you're mostly doing there is working out the resonances of your room/speakers/headphones. Sorry to say it, but this is terrible advice on how to apply corrective EQ.


So, what's a better way? Let's see your not terrible advice.

> resonances of your room/speakers/headphones

Yet the frequencies that stand out depend so much on the input material, and arent't the same between recordings in different rooms or with different mics, even with the same playback setup. Weird.

As the manual of that VST so terribly states

> This is an excellent tool to correct a boxy low-endsound, even out resonances in a recording or reduce excessive sibilance in a vocal part.

Sure, they don't outline this specific approach, but others do, sadly:

https://www.soundonsound.com/sound-advice/q-how-do-pinpoint-...

> In my experience visual analysis won’t usually help a great deal here, and you have to adopt more of a ‘hunt and peck’ approach, sweeping a narrow EQ boost around the spectrum and then placing an EQ cut wherever the boost sounds most unappealing

And it just goes on and on :(

http://indierecordingdepot.com/t/how-do-i-find-resonant-freq...

Of course, since you said it's such terrible advice, my ears noticing the results are probably lying to me, and I can hardly wait to find out how to do it correctly at last.


If you are going to use that technique, don't use extreme Q settings, because as I said you are mostly going to be finding resonances that aren't actually in the signal, but are part of listening in your specific environment. Granted you will find some of the problem frequencies from the source, but with a much lower Q you're at least less likely to find stuff that isn't. EQ should be applied gently, if you require extreme settings then you have a shitty source material, take some steps to correct that. There are no pro-spec EQ hardware units that offer such extreme Q settings, because they are next to useless except in extremely limited circumstances.

Just because something is repeated a lot doesn't automatically make it a great idea: see "Brexit means Brexit" or "Build The Wall" for examples.


Thanks very much, I'll give that a crack next time in Logic - audio EQing/compression definitely isn't my strong point :)


Hey, thanks for this. I watched this video a few months ago when I was getting started with Magic Lantern. Nice work!


That's great to hear, cheers!


Applications are open for YC Summer 2019

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact

Search: