As much as this is important, I am convinced Open things are most useful at the lower end, opening doors to people who would otherwise just not be able to participate.
I would have a hard time justifying getting one of those for myself, because of the price, as a non-pro.
Still, I hope they succeed.
I never liked hacking cameras that were not made to be hacked, but this one is. It’s not the first, but the other projects appeared to be barely getting by.
The sad part is just how long this has taken to come to fruition, and how little sense it makes economically at this point.
For one, many of the problems it solves in a more holistic and better-integrated way already have standard solutions widely adopted in the indie production market.
RED cameras are fairly prolific and accessible to a lot of people these days, and other imperfect solutions are popularly adopted as well.
While this option is all-around more capable, the modding workflow is still foreign and will take some time to adapt to.
Purely economically, there is a lot less money flowing into indie production outfits, particularly from advertising and indie entertainment enterprises like YouTube media channels. Advertising is embracing data-driven decision models that crowd out the costly creative production efforts that would properly utilize a camera like this.
In short, this thing is for artists, and artists are screwed.
To those asking why it doesn't have AF: no cine camera has AF.
As for lens selection: the camera seems to be modular, so there shouldn't be much stopping Axiom, or you yourself, from making a different lens mounting plate.
Having an API for the lens and aperture would have opened it up for developers to use these creatively, such as focusing on one point in the frame and then zipping focus to another point using easing functions.
Something like http://www.rtmotion.com/lens-control-system or https://cinegears.com/product/cinegears-single-axis-wireless...
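A minimal sketch of the idea, assuming a hypothetical `LensController` API (AXIOM does not expose one today; this is purely illustrative): rack focus from one mark to another along an eased motion profile instead of a linear ramp.

```python
# Sketch of a scripted focus pull driven by an easing function.
# LensController is a hypothetical stand-in for a real lens API.
import time

def ease_in_out_cubic(t: float) -> float:
    """Map linear progress t in [0, 1] onto an eased S-curve."""
    return 4 * t ** 3 if t < 0.5 else 1 - (-2 * t + 2) ** 3 / 2

class LensController:
    """Stand-in for whatever an active lens mount would expose."""
    def __init__(self):
        self.focus_position = 0.0  # normalized: 0.0 (near) .. 1.0 (far)

    def set_focus(self, position: float) -> None:
        self.focus_position = max(0.0, min(1.0, position))

def focus_pull(lens, start, end, duration_s=2.0, steps=50):
    """Zip focus from start to end with an eased motion profile."""
    for i in range(steps + 1):
        t = i / steps
        lens.set_focus(start + (end - start) * ease_in_out_cubic(t))
        time.sleep(duration_s / steps)

lens = LensController()
focus_pull(lens, 0.2, 0.8, duration_s=0.1)  # quick pull for the demo
print(round(lens.focus_position, 2))  # ends on the far mark: 0.8
```

The easing function is the whole trick: the focus motor accelerates into the pull and settles gently, which is what a good focus puller does by hand.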
Not sure about aperture adjustment from the camera body itself; usually (though not always) cine lenses are constant-aperture primes or zooms.
How about the https://en.wikipedia.org/wiki/Canon_EOS_C300_Mark_II with dual pixel autofocus?
When it comes to cinema _movies_ (and while it's true the EOS C300 Mark II has dual-pixel AF), there will be a focus puller.
Relying on AF will end up biting your ass on a film set. Auto-focus doesn't know about the aesthetic of the film, only that something (a face) is in focus.
On (movie) sets the cinematographer will be making those decisions, not AF. While dual-pixel AF does "...[enable] the user to see if they are focused in front of or behind the subject through visual observation, [allowing] quick and accurate manual focus to be achieved", in movie making it should only be used as a tool, not a crutch.
You're technically correct on the existence of AF in the C300 MII.
Edit: Cine cameras are beginning to delve into AF but it is not replacing a focus puller anytime soon.
By the time it's out, you could buy either a fully integrated solution that's far ahead technically - from the likes of RED, Canon, Blackmagic etc - or if you're after some exotic application you could just use an industrial camera from IDS or Ximea.
Yes, the customization and the capabilities around this camera could be endless, but it would take something really special on that end to overcome the hardware limitations inherent in the development model and to make it a viable professional tool.
In addition, the FPGA you call limited I'd argue gives access to powerful reprogrammable logic. Why hard-code image processing algorithms when you can update them as new techniques come out?
Not only is this an amazing sensor (180fps, global shutter) but it's using cutting edge technology. I'd argue that it's possible sensor technology is reaching some of its limits and an industrial sensor here can match the quality of DSLR sensors. It may only be 12MP but Google and Apple have both shown that intelligent algorithms can produce amazing results from sensors that size.
The video on the properties of the sensor itself is quite impressive: http://vimeo.com/17230822
What does machine learning have to do with photography? If there's an explanation for that, then I'd add why does it need to happen on the camera?
ML has thousands of applications in photography, from HDR to mask creation to feature removal to subtle aids like noise reduction.
Nowhere I can think of (other than Android, and even that comes with a lot of caveats) has an open hardware upstart challenged entrenched commercial players for legacy needs and won.
Where you do win is by targeting emerging needs, especially ones legacy commercial players are ill equipped to take advantage of (due to institutional inertia or ugly tech stacks).
As for why ML on edge devices: a large amount of work is going into running models (or first passes) on edge devices with limited resources (see articles on HN). I would assume they have business reasons.
But offhand, almost everything about vision-based ML in the real world gets better if you can decrease latency and remove high-bandwidth internet connectivity as a requirement.
It's a neat idea. Your implication that it's good-enough doesn't scan to me, as even a prosumer videographer. Maybe I'll be wrong--but Blackmagic is already vastly ahead in the pure-cinema space, so...
The Axiom camera doesn't appear to be in the same ballpark as those other cine cameras, either, which is the disjoint in here for me. It's trying to make its bones on being "hackable" instead of "a great camera". I'm not sure there's that middle ground, unless it happens to come in significantly cheaper than the BlackMagic stuff (which is my next upgrade, they're fantastic).
But for video, and video specifically, a "hackable base system" is not a raison d'être beyond the niche-of-a-niche hacker community that's not going to pay you materially for the thing in the first place. This stuff is expensive! Near as I can tell, something like this needs a reason for videographers, not computer people, to pay for it in order to get the traction needed to expand and progress. The set of people who need to "add things" to a video camera for their use case is small, because there are few use cases not better served by getting the highest-quality, cleanest raw footage you can (which is not helped by adding, say, inline color correction), storing it in the least-lossy format you can, and then transforming that output down the line, either in post/editing for recorded media or in your better-equipped vision mixer for live.
The most interesting "hardware hacks" I can think of for cameras would be in the focus/image stabilization arena, but I don't think you get much insight into that with the passive E-mount (could be wrong, I don't use Sony) as opposed to an active E-mount or Panasonic's Power/Mega OIS stuff in the MFT. For Power/Mega OIS in particular, my understanding is that the sensors are in the lens but the brains are in the camera--could be some novel things to do there.
Good stabilization techniques don’t happen in the lens and passive E mount is perrrfect for this camera.
Stabilization is a mount's job, and not really a hack. Most camera hacking these days is done with DSLRs, which are not "real" video cameras, for lack of a list of technical explanations. Magic Lantern is a popular DSLR hacking tool and it's problematic to use. Think trying to use Windows as if it were Linux.
I would, however, contest that stabilization is just the job of the mount--optical stabilization relies on the lens, and it's pretty valuable to me. I've broadcast with both Panasonic's Power OIS lenses and non-OIS lenses and the difference is pretty stark. (And given that live broadcasts pretty often just use Panasonic GH's or BlackMagic's Pocket or Micro cameras, it's basically the same tech and thus something to consider.)
Further edit: 'bprater made a good point elsewhere in the thread: an active mount would allow for driving focus through software, too. I don't mean autofocus, but rather the ability to automate focus. For example, I wrote an application to control my video mixer to better be able to do a one-man show and my GH3/GH4 have decent mobile apps for one-man control. Being able to have the camera remember correct focus settings when I'm moving between standard spots in my studio would actually be a pretty useful thing to be able to trigger without getting behind the camera! Which, once more, goes back to "hey, so who is this SSH-capable, 'hackable' camera actually for?".
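The "remember focus settings for standard spots" idea above can be sketched in a few lines. `FocusMotor` is a hypothetical stand-in for whatever drive API an active mount (or an external focus motor) would expose; nothing here is a real AXIOM interface.

```python
# Hedged sketch: store and recall named focus "marks" for fixed
# positions in a studio, for one-man-show operation.

class FocusMotor:
    """Hypothetical focus-drive API (active mount or external motor)."""
    def __init__(self):
        self.position = 0.0  # normalized focus position

    def move_to(self, position: float) -> None:
        self.position = position

class FocusPresets:
    """Remember the current focus position by name; recall it later."""
    def __init__(self, motor: FocusMotor):
        self.motor = motor
        self.marks: dict[str, float] = {}

    def store(self, name: str) -> None:
        self.marks[name] = self.motor.position  # capture current focus

    def recall(self, name: str) -> None:
        self.motor.move_to(self.marks[name])

motor = FocusMotor()
presets = FocusPresets(motor)
motor.move_to(0.35)        # focused on the desk spot, by hand
presets.store("desk")
motor.move_to(0.70)        # focused on the whiteboard spot
presets.store("whiteboard")
presets.recall("desk")     # triggered remotely, mid-show
print(motor.position)      # 0.35
```

On an SSH-capable camera, `recall` could be bound to a hotkey on a phone or a foot switch, which is exactly the kind of thing a closed body will never let you do.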
It's for me, and other nofilmschool.com readers. But, I'm not going to buy it. I've awkwardly transitioned to using CGI as my medium, but if I were still using camera, I might be considering a purchase of one of these.
It's the latest in a chain of attempts at upending the big companies who lock features on their cameras and have single-handedly held back indie film production by miles, for years.
None of these attempts really get off the ground. RED gave the appearance of being a savior many years ago, but RED's plan all along was just to bite off the opportunity and ultimately side with the camera nazis.
As for the mount, I never even learned to use focus electronically controlled by the body, or zoom, or stabilization. I would have no use for any of them. Passive E-mount does everything I could conceive of needing.
When somebody who this camera is for wants to control focus wirelessly, they rent a Preston system or one of the newer ones like a Lenzhound.
I've been following this project for a few years, and while it's got a slow development/adoption curve, it's on the right track both economically and technologically; eventually it'll become the Linux of cameras. The thing is, we're already in an era of technological abundance where digital photography is concerned; commodity solutions are already good enough for pretty much all commercial purposes. You can shoot an indie feature on an iPhone; even huge-budget films like the Avengers franchise use Blackmagic Pocket cameras as crash cams (cheapish cameras that you rig on stunt cars to run automatically and provide a second or two of dramatic first-person footage).
There will of course always be more highly engineered offerings out there, which will be important for highly specialized tasks like astronomy and scientific work. And ultradense sensors might be deployed to allow photography at multiple focal depths - but even that is likely to find its way into consumer gear before long. At this point the industry focus is not so much on which camera has the best specifications as on workflow and support. Blackmagic have done incredibly well for themselves with a (relatively) technologically inferior sensor/package by being more open than RED and thus building a larger and stronger community at a somewhat lower price point, for example. It's a good example of the tortoise beating the hare over the length of the race.
You're also making a huge compromise on lenses when going with this approach. When it comes to photography, lenses matter more than the camera 95% of the time (unless you're doing sports/action photography, where you need good AF).
With this it's all manual focus plus adapters, which always involves compromises compared to using lenses on their native mount.
It's super-cool technically but I don't see it taking over anytime soon.
Also, being able to adapt lenses is a huge bonus. Look at the Micro Four Thirds market: there are many adapters available and in wide use, enabling a much wider selection of lenses. It's pretty much the only way Sony cameras get used. The selection of Sony native-mount lenses is limited, but adapt to a PL mount or EF mount and the world opens up significantly.
I've got a non-trivial amount of Canon gear (300 f/4, 35 f/1.4, 135 f/2, etc.) and looked into the Sony route since their A7 line looks killer. In the end all the adapters had compromises, so I decided to stick with Canon. From all reports the AF drive on the adapters still has a bunch of edge cases.
They are. I recently tried out an A7S, nice thing, but getting the A7S2 in about two months. 4K in camera recording is worth getting the Mk2 for.
However, as someone that has shot a lot of video footage with no light other than a full moon using the A7Sii, I highly suggest getting the cable connected remotes for this camera. Trying to fly this body without a remote on a shoulder mount with lenses, follow focus, monitors, etc is brutal. The start/stop button is so tiny and hard to press, it makes you want to throw the entire rig as far as you can. The button is more along the lines of a reset button that you need a small pin to press.
It's pain points like this that make a stills camera that can shoot video much different from a true digital cinema camera.
The Axiom angle is more cinematographically oriented.
> from the likes of RED, Canon, Blackmagic etc
Guess which sensor is in some of Blackmagic's offerings?
I'm not aware of any pro cinema camera using the CMV12000 as-is, but I've been out of the loop for some time.
I'm pretty sure the Blackmagic used CMOSIS sensors at least on the 4K Production Camera.
On Axiom - I think it's great to have open source cameras like this, but the price point needs to come down significantly before it will get traction. Also, with indie camera companies expect long delays - Axiom's been years in the making, as have KineFinity cameras (another independent RAW cinema camera brand).
From a practical POV for indie filmmakers, the resale value of these cameras is also questionable when compared to BlackMagic or RED.
I think much of the benefit of RAW (dynamic range) will actually come from 10-bit HEVC cameras shooting HDR. The GH5 is one of the first of those; in 1-2 years I expect every smartphone will shoot 10-bit HEVC with significantly more latitude than current 8-bit video, even at small sensor sizes.
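The arithmetic behind the 8-bit vs 10-bit latitude point, under the simplifying assumption of plain integer code values per channel:

```python
# Code values available per channel at each bit depth.
codes_8bit = 2 ** 8    # 256 levels per channel
codes_10bit = 2 ** 10  # 1024 levels per channel

# 10-bit gives 4x the tonal resolution, which is what buys the extra
# latitude for grading HDR/log footage without visible banding.
print(codes_10bit // codes_8bit)  # 4
```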
I would actually love to see mobile SoCs like the Qualcomm 845 (4K 60fps, 10-bit HDR, Rec. 2020 color gamut) and Android used on semi-pro/pro cameras, with a good OLED touchscreen, a relatively large (1" or larger) sensor and maybe M43-mount lenses. Something like the BMPCC but with an updated sensor and chipset, running Android.
Plus it's (user) hardware/software upgradeable: when some fancy new thing comes out, you just bust out the toolkit instead of buying a whole new camera.
I'm thinking something like a light-field sensor with the "LSD simulation" firmware would be a perfectly sensible upgrade.
Though, I have to say, that sounds like an anti-feature -- instead of just taking a few shots they'll just take all your photos and throw you in jail for espionage because "if you have nothing to fear you have nothing to hide."
Encryption isn't being pursued, since closed protocols are exactly what the camera is designed to steer away from, but it should be relatively straightforward to implement where required, e.g. with cryptsetup and Linux Unified Key Setup (LUKS).
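Since the camera runs Linux, the stock cryptsetup/LUKS workflow would apply as-is. A rough sketch, assuming the recording media shows up as `/dev/mmcblk0p2` (a guess; adjust to the actual device) and that you run this as root:

```shell
# One-time setup: initialize LUKS on the recording partition
# (destroys existing data; prompts for a passphrase).
cryptsetup luksFormat /dev/mmcblk0p2

# Unlock the partition; it appears as /dev/mapper/footage.
cryptsetup open /dev/mmcblk0p2 footage

# One-time: create a filesystem inside the encrypted container.
mkfs.ext4 /dev/mapper/footage

# Mount it and point the recorder at /mnt/footage.
mount /dev/mapper/footage /mnt/footage

# ... shoot ...

# Lock everything again before handing the card to anyone.
umount /mnt/footage && cryptsetup close footage
```

Whether the in-camera recording pipeline could write to a dm-crypt mapping at full data rates is an open question; this only shows that nothing proprietary is needed.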
I shoot hockey with a Canon 1DX II, before I got that body I used a 5DIII. If I'm shooting for me, I use a Canon 200mm f2; if I'm shooting for the pros, they want more like a 300mm so I add a 1.4x teleconverter.
For the people who don't know what any of that means, this might help: the 1DX II is Canon's best sports body, it retails for about $6000. The 200mm + the 1.4x is another $6000.
So truth in advertising, I've got a lot invested in my current kit (not just that stuff, I have a number of other Canon lenses, some Sigma, Rokinon). So perhaps I'm not objective.
All that said, I don't get this camera at all. No autofocus and no viewfinder are complete deal breakers for me (an electronic viewfinder doesn't count unless it is 100% as fast as a normal viewfinder). I'm timing shots so I get the puck going into the net, which means a lag of even a few milliseconds screws me up. And when I say "me" I really mean any sports photographer, or any action photographer whose workflow doesn't allow using burst to hopefully get the right shot by accident.
It looks like lots of cool technology but I'm definitely not interested in owning one.
So who is interested in owning one and what would you do with it? Where does this camera shine?
The reason for a camera like this is to open the tool set up for active development, in a way that traditional camera-makers haven't opened their hardware up for access.
Here's an example: high-ISO shooting. What makes it possible for cameras like Sony's a7s series to shoot in nearly dark conditions? Is it the sensor or have the engineers leveraged the fast CPUs to do real-time noise reduction?
By giving engineers access to the hardware, we could start exploring high-ISO programming. Similarly, we might learn how to auto-calibrate lenses in new ways that could take the cheap 'nifty-fifty' lens, apply machine learning and have it perform like a $3k Zeiss lens.
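To make the lens-correction idea concrete in its simplest form: even without any machine learning, open sensor access lets you "learn" a per-pixel vignetting correction from a single flat-field exposure. This is a hedged toy example with a synthetic frame, nothing AXIOM-specific; the $3k-Zeiss claim would obviously need much more (distortion, chromatic aberration, learned sharpening).

```python
# Toy "learned" lens correction: estimate a vignetting gain map from a
# flat-field (evenly lit) exposure, then apply it to frames.
import numpy as np

def learn_vignetting_gain(flat_field: np.ndarray) -> np.ndarray:
    """Per-pixel gain that flattens a flat-field exposure."""
    return flat_field.max() / flat_field

def correct(frame: np.ndarray, gain: np.ndarray) -> np.ndarray:
    return frame * gain

# Synthetic flat-field with radial falloff (corners ~40% darker).
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(y - h / 2, x - w / 2)
flat = 1000.0 * (1.0 - 0.4 * (r / r.max()) ** 2)

gain = learn_vignetting_gain(flat)
corrected = correct(flat, gain)
print(corrected.std() < flat.std())  # True: falloff removed
```

With real hardware access you would calibrate per lens, per aperture, and per focal length; the point is that the calibration loop becomes user code instead of a vendor secret.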
Even a topic like color science, the holy grail of the knowledge bases at companies like Canon or Arri (makers of the Alexa), could be explored by a wider audience of scientists and engineers. Until we can get our hands on the hardware through code, most of this is nearly impossible, outside of projects like Magic Lantern.
I suppose they think they have secret sauce buried in there but by keeping it secret they aren't getting any patches from us hackers.
Mostly because the A7S(2) has a 35mm sensor with "only" 12 MP - it's simple physics: the individual pixels are huge compared to those in 50 MP+ cameras and thus much less susceptible to noise.
In addition, the A7S line, during 4K recording, does not do binning or other quality-degrading post-processing (because its resolution is "low" enough that a 1:1 4K readout is possible). This reduces processing load as well.
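A back-of-the-envelope check on the "bigger pixels" point, assuming square pixels on a full-frame (36 x 24 mm) sensor:

```python
# Approximate pixel pitch for a full-frame sensor at two resolutions.
import math

def pixel_pitch_um(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Pixel pitch in micrometres, assuming square pixels tile the area."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

p12 = pixel_pitch_um(36, 24, 12)  # A7S-class: ~8.5 um pitch
p50 = pixel_pitch_um(36, 24, 50)  # high-res bodies: ~4.2 um pitch
print(round(p12, 1), round(p50, 1), round((p12 / p50) ** 2, 1))
# -> 8.5 4.2 4.2  (each 12 MP pixel gathers ~4x the light)
```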
For computer vision in autonomous driving. We want to preprocess data in real time using the information provided by other sensors (vehicle inertial sensors, tachometers and so on). This is called "sensor fusion", something very similar to what humans (or any animal) do.
We already use commercial cameras, but the ability to integrate our own hardware (our own FPGAs, in the future ASICs) with the camera at a low level is simply very tempting. In software we use Linux for the same reason: there is no way you could integrate that deeply with proprietary software.
We need real raw data, not raw data already preprocessed by the manufacturer, and total control over it, especially exposure. A vehicle is moving, and if the preprocessor can't handle the change of lighting when entering or exiting a tunnel fast enough on a bright day, people could die. You need total control, stability, and repeatability.
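To illustrate why owning the exposure loop matters, here is a minimal sketch of a proportional auto-exposure controller reacting to a collapse in scene brightness (entering a tunnel). Everything here is a toy assumption: the scene model, the gain constant, the target luminance; a real system would fuse the inertial and speed data mentioned above.

```python
# Toy proportional auto-exposure loop: drive mean frame luminance
# toward a target as scene brightness changes abruptly.

TARGET_LUMA = 0.5  # desired mean frame luminance (normalized 0..1)

def step_exposure(exposure: float, mean_luma: float, gain: float = 0.8) -> float:
    """One proportional update of exposure toward the target luminance."""
    error = TARGET_LUMA - mean_luma
    return max(0.01, exposure * (1.0 + gain * error))

def simulate(scene_brightness: float, exposure: float, frames: int) -> float:
    """Run the control loop against a constant-brightness scene model."""
    for _ in range(frames):
        mean_luma = min(1.0, scene_brightness * exposure)  # sensor clips at 1
        exposure = step_exposure(exposure, mean_luma)
    return exposure

exposure = simulate(scene_brightness=1.0, exposure=0.5, frames=30)   # bright day
exposure = simulate(scene_brightness=0.05, exposure=exposure, frames=30)  # tunnel
print(0.4 < 0.05 * exposure < 0.6)  # True: luminance back near target
```

When you control this loop yourself, you also control how many frames it takes to converge, which is exactly the safety-critical parameter a closed preprocessor hides from you.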
Without this we will have to design all by ourselves, which is very expensive.
It looks like they are focusing on cinema so we are not sure about this, but it could be a very interesting possibility to explore.
It is possible to partner with sensor or camera makers, in order to get the required specs and level of integration, but it is extraordinarily expensive. And only a handful of companies can do this. For the individual, it's out of the question to partner with a camera maker. So for the interested individual, AXIOM provides a real benefit. And for commercial development, companies like FLIR Integrated Imaging Solutions (formerly Point Grey Research) exist, but still lock down drivers and control firmware. And you can typically only afford to partner with one or two camera manufacturers, whereas in this case all that overhead is gone and you can just use the device directly.
It's a small market, but if you're in it, this kind of project is a great development.
One tip I picked up early (especially for something like sports where if you've missed the shot, the moment is gone) is to just take more shots and not even think about sorting or deleting them during the shoot. I ended up with a lot more usable images that way.
The one exception was when my DSLR mirror broke and I suddenly had to fully manually meter and focus (which I'd done before but wasn't exactly familiar or comfortable with it). That was an interesting shoot! IIRC, I was having to shoot with a narrow aperture (not ideal as it was in a dark room) and compensate with a higher ISO/longer shutter speed, then check the focus after the shot, as I didn't have a working view finder to properly dial in the focus.
Blackmagic has been making a lot of people very happy, and pissing a lot of others off at the same time. By that, they must be doing something right ;-)
It seems to me that BMD feels that the market is made up of nothing but price-inflated "things". "Things" could be hardware or software. They started with their video cards and other video hardware converter devices, but it was their acquisition of DaVinci and the subsequent release of the Resolve software for free that really started the polarization of opinion. After that, they dove into the camera realm, and that really got people's attention. Granted, their first gen camera left a lot to be desired, but whose first gen anything doesn't? (I'm looking directly at you Nokia Ozo.) Once they had a decent imaging chip, they went after the film scanning realm.
Each of these areas (color correction, cinema cameras, film scanning) is historically an extremely expensive market to get into. Blackmagic has "disrupted" these traditional markets (to use a Silicon Valley buzzword), but without being a giant frat-bro culture.
TLDR: I love the Blackmagic gear too!
Looks like the original cameras, minus the pocket, are now EOL.
This is why you would want to be "chimping" the shots as you shoot them (or just know from experience if you got the shot or not).
Quite different from a more traditional workflow where you take a ton of images, classify them as keep/delete after the fact, spend time punching the keepers up in Photoshop/Lightroom, etc.
Edit: a downvote? It's true, see this presentation: https://www.youtube.com/watch?v=PyJ8PoASWdw
It's also useful for remote control -- there are scenarios where there is place to set up a camera, but an operator can't be there as well -- and for automatic downloads (e.g. in a studio setting).
I'm interested to see the implementation on the AXIOM, and I'm debating placing down an order.
If they could also produce a high quality set of prime lenses, that'd be nice. I think the real magic (and difficulty) lies in producing great lenses, which is an entirely analogue process.