The IMX296 is a nice sensor, but using it from a RasPi (assuming you could buy one...) may be a bit more challenging than you expect. Even though the frame rate is not especially high, the 1440x1080@60fps mono stream you capture does eventually need to go somewhere.
RAM fills up quickly and SD storage is just too slow for this kind of usage. Broadcom hardware JPEG compression doesn't seem to work too well either, so you'll quickly end up with more image data than you can handle.
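A quick back-of-envelope calculation makes the point concrete (assuming 8-bit mono pixels; a 10-bit raw mode would be higher still):

```python
# Raw data rate of a 1440x1080 mono stream at 60 fps, assuming 1 byte/pixel.
width, height, bytes_per_px, fps = 1440, 1080, 1, 60
rate_mb_s = width * height * bytes_per_px * fps / 1e6
print(round(rate_mb_s, 1))  # 93.3 MB/s
```

That sustained ~93 MB/s comfortably exceeds what most SD cards can write continuously, so without compression the buffer fills in seconds.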
Then, for lots of applications, you'll actually want to use multiple sensors. Even though the RasPi only has a single camera connector, you could use a breakout board to connect 2 or 4 cameras. Problem is: you most likely want these cameras to capture at the exact same time, because frames captured merely around the same time can still show a moving object at wildly different positions. And I don't see any provisions for hardware triggering with these modules.
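To see why a hardware trigger matters, consider how far an object moves during a worst-case one-frame trigger skew (the speed here is an illustrative number, not from the thread):

```python
# Position disagreement between two unsynchronized cameras, assuming the
# triggers can drift by up to one frame period at 60 fps.
speed_m_s = 20.0     # hypothetical object speed (~72 km/h)
skew_s = 1 / 60      # worst-case desync: one frame period
print(round(speed_m_s * skew_s, 3))  # 0.333 (metres of apparent displacement)
```

A third of a metre of disagreement between views is fatal for stereo matching or triangulation, which is why free-running capture isn't good enough here.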
The Nvidia SOC ecosystem tends to be a slightly better choice for imaging applications (more GPU encoding options and some provisions for camera sync). But most industrial applications stick with GigE camera modules for a reason...
Can’t you encode on the fly to H.264? Ages ago I did this on an i.MX6 to get the lowest possible power consumption. It was interesting to learn that you save a ton of power if you can get the raw data to the VPU for compression before it ever hits RAM. Then whatever gets moved around in memory afterwards is drastically smaller.
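On a Raspberry Pi, a sketch of this approach using the stock capture tool might look like the following (assuming a libcamera-based OS image where `libcamera-vid` is available; on newer releases the tool is named `rpicam-vid`, and actual supported modes depend on the sensor driver):

```shell
# Capture 10 s of 1440x1080 @ 60 fps, hardware-encoded to H.264,
# so only the compressed stream is written to storage.
libcamera-vid -t 10000 --width 1440 --height 1080 --framerate 60 \
  --codec h264 -o capture.h264
```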
For processing, all those points hold up. On the interfacing side, you do have the option of the Compute Module 4, which exposes two CSI connections. I'd still like more compute on the board to handle the video streams, though.
This is exciting news for applications that require calibration or associating points between image pairs of fast-moving objects. I wonder when we'll start seeing more global-shutter cameras for automotive applications.