
Libcamera – A complex camera support library for Linux, Android, and ChromeOS - executesorder66
http://libcamera.org/
======
catern
>...collaboration with the industry to develop a camera stack... protecting
vendor core IP.

Can't say I have any interest in protecting "vendor IP" in their crappy camera
drivers. Let vendors open their blobs! Closed source camera drivers will just
reduce quality and increase bugginess.

~~~
Jonnax
What's the point of this all or nothing attitude?

This looks like a move to getting better support. The alternative is getting
nothing.

~~~
jdnenej
Because all or nothing forces someone to compromise. By default, almost all
vendors would choose to keep things proprietary. If going open source is the
only option, some vendors will release their source; and if they don't,
hobbyists will write their own drivers, since the work is very useful.

Wikipedia faced a similar issue when considering moving to HTTPS everywhere.
Some countries were blocking individual Wikipedia pages, and HTTPS would break
that kind of selective blocking, so it was uncertain whether those countries
would just block the whole of Wikipedia instead. As it turns out, they mostly
did not block the whole website, and those selected pages became unblocked,
because the alternative forced by the all-or-nothing approach was a step too
far.

~~~
izacus
> Because all or nothing forces someone to compromise

When exactly did that work? In which project? All this attitude did was make
Linux unusable for many people, keeping it marginal and thus a weaker force
for pushing companies to support and open-source drivers.

Things started changing AFTER more people were able to adopt it, thanks to the
less black-and-white attitude of some distributions.

~~~
matheusmoreira
The Linux kernel is the best example of the success of such a strategy. The
kernel's driver interfaces are unstable by design:

[https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst](https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst)

[https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html](https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html)

[https://youtu.be/iiED1K98lnw](https://youtu.be/iiED1K98lnw)

There are technical reasons for this but it also serves another purpose:
maintaining leverage over companies that would take advantage. The Linux
kernel receives tens of patches every hour. If companies refuse to upstream
their drivers, they are left behind and must pay the maintenance costs or drop
support. Their products are worse for it.

I wish every free software project had this power.

------
ChrisMarshallNY
This is no simple thing to do.

The biggest issue is that "raw" Linux is not an RTOS.

When the photographer presses the shutter, you have a few milliseconds to
capture the image. You can't have the OS decide to prioritize receiving a
tweet before getting to the shutter release.

It's even more intense with video, and most still cameras are quickly morphing
into high-quality video cameras.

Also, image processing is REAL intense; especially with Computational
Photography. It can suck down your battery in seconds.

There are a few ways to deal with it. You could use something like Xenomai or
RTLinux, or offload the realtime-sensitive processing to some kind of
dedicated SoC (which is probably what most phone manufacturers do - Apple
certainly does).

It's a non-trivial task.

~~~
amelius
Curious, how does V4L handle these things?

~~~
joshvm
It gets a bit messy.

In order to capture an image, most image sensors are triggered. Some sensors
will operate in "free-run" mode, but in reality something is sending a capture
signal to the camera. In sensors where you have very low-level control, the
trigger typically clears the pixels (dumps the accumulated charge) and starts
exposing. Then, when the exposure is complete, the camera makes the data
available for read-out (for a phone sensor, analogue-to-digital conversion is
likely done in circuitry on the sensor itself, not in the phone).

That trigger is the important bit. Using triggering you can synchronise
multiple cameras (e.g. the sensors in multi-camera phones are almost certainly
co-triggered to avoid ghosting), illumination units, etc. In fact, many camera
chips can also be set to output flash signals while they're exposing, which
eliminates the latency of sending separate "capture" and "flash" signals. When
I've deployed machine vision systems, we would often use an external system
(e.g. a microcontroller) to trigger everything. Interfacing hardware and
software triggering is a nightmare.
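To make the trigger/expose/read-out sequence above concrete, here's a minimal pure-Python sketch. `SimulatedSensor`, its method names, and the timings are all invented for illustration; a real sensor would do this in silicon, driven by an electrical trigger line.

```python
import time

class SimulatedSensor:
    """Toy model of a triggered image sensor: a trigger clears the
    accumulated charge and starts the exposure, and the frame only
    becomes readable once the exposure has completed."""

    def __init__(self, exposure_s=0.01):
        self.exposure_s = exposure_s
        self._exposure_done_at = None

    def trigger(self):
        # Clear accumulated charge and start a new exposure.
        self._exposure_done_at = time.monotonic() + self.exposure_s

    def read_out(self):
        # Data is only available after the exposure completes.
        if self._exposure_done_at is None:
            raise RuntimeError("sensor was never triggered")
        remaining = self._exposure_done_at - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # wait for end of exposure
        return b"frame-data"  # stand-in for the pixel read-out

# Co-triggering two sensors, as in a multi-camera phone: one trigger
# per sensor, issued back to back so the exposures overlap in time.
left, right = SimulatedSensor(), SimulatedSensor()
left.trigger()
right.trigger()
frames = [left.read_out(), right.read_out()]
```

The point of the co-triggering at the end is the one made above: if the exposures overlap, moving subjects land in the same place in both frames, which is what avoids ghosting when the images are combined.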

Then you've got whatever low-level interface is actually communicating with
the camera. On a phone, that's probably MIPI CSI-2. Modern SoCs have dedicated
(hardware) camera interfaces (the ISP) that are designed specifically to load
images from camera sensors and do some minimal processing. You need kernel-
level drivers for whatever your SoC is. These drivers will (in theory) let you
set up arbitrary image sensors with whatever resolution/timing they require.
But the key is that you need two drivers: one to control the ISP and another
to send commands to the camera sensor. A lot of the time (cough Broadcom) you
only get a blob for the ISP, and you almost never get full datasheets for
image sensors without being a big customer.

Rockchip have an open source driver for their ISP:
[http://opensource.rock-chips.com/wiki_Rockchip-isp1](http://opensource.rock-chips.com/wiki_Rockchip-isp1)

See Rockchip again for an open-source example of an actual sensor driver.

[https://github.com/rockchip-linux/kernel/blob/release-4.4/drivers/media/i2c/imx219.c](https://github.com/rockchip-linux/kernel/blob/release-4.4/drivers/media/i2c/imx219.c)

Want to make an educated guess why libcamera supports the [Rockchip] RK3399?
They make it comparatively easy.

In the end, V4L acts as a mediator between this low-level interface and other
things which want to talk to the camera. Rockchip have provided what amounts
to a plugin for V4L. Then on top of V4L you have libraries like OpenCV which
give you nice interfaces like `VideoCapture` to grab frames and do stuff with
them. To answer your question you probably need to be more specific: the
performance of what V4L does for a specific ISP/sensor combination very much
depends on those underlying drivers.

If you can afford it, a safe strategy is to leave the sensor in free-running
mode with a separate thread dedicated to capturing images. Then in your
application you pick off the most recent frame and process it in parallel.
For a 30 fps camera you therefore have about 33 ms to do your stuff before
the next frame comes in.
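The free-running strategy above can be sketched with a couple of threads. This is a hedged illustration, not any real capture API: `LatestFrame` and `capture_loop` are invented names, and the capture thread just fabricates frame strings where a real one would block on the driver.

```python
import threading
import time

class LatestFrame:
    """Hold only the most recent frame; the consumer never queues up
    stale images, it just picks off whatever arrived last."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None

    def publish(self, frame):
        # Called by the capture thread: overwrite the previous frame.
        with self._cond:
            self._frame = frame
            self._cond.notify_all()

    def latest(self):
        # Called by the application: block until at least one frame
        # exists, then return the newest one.
        with self._cond:
            while self._frame is None:
                self._cond.wait()
            return self._frame

def capture_loop(sink, fps=30, n_frames=5):
    # Stand-in for a free-running sensor delivering frames at ~30 fps;
    # a real capture thread would block on the driver for each frame.
    for i in range(n_frames):
        sink.publish(f"frame-{i}")
        time.sleep(1 / fps)

sink = LatestFrame()
t = threading.Thread(target=capture_loop, args=(sink,), daemon=True)
t.start()
frame = sink.latest()   # the consumer grabs whatever is newest
t.join()
```

The design point is that `publish` overwrites rather than enqueues: if your processing takes longer than a frame period, you silently drop intermediate frames instead of falling ever further behind the sensor.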

------
teleclimber
If Linux phones are going to be a thing (PinePhone, Librem) they'll need great
cameras and excellent image processing software to be even somewhat
competitive with Google Pixels and iPhones and Samsungs.

Given how hard other manufacturers try and fail to make equally good cameras,
it's going to be a steep climb for the open source community too.

So far I am not seeing much work being done in that space. Maybe this is a
start?

I want an open non-spying phone but it's going to be hard to let go of the
amazing camera in my pocket now (Pixel 3a).

~~~
catalogia
> _" If linux phones are going to be a thing (pinephone, librem) they'll need
> great cameras and excellent image processing software t be even somewhat
> competitive with Google Pixels and iPhones and Samsungs."_

Couldn't they just find another niche? I think relatively few consumers know
anything about photography. Even a proper professional DSLR would be useless
to me because I know next to nothing about photography and tools don't make
the artist. No picture I've ever taken has any artistic merit or technical
merit.

Of course, good camera software is a good thing and for the segment of society
that's enthused about photography it's going to be essential. I'm just not
convinced it's the secret sauce to mass appeal.

Edit for response:

> _Well, okay, what problem would you rather see them tackle?_

I don't have such a preference. I expect volunteer developers will work on
matters they find personally compelling. Those that find photography software
compelling should pursue their passion.

~~~
sneak
> _Of course, good camera software is a good thing and for the segment of
> society that's enthused about photography it's going to be essential. I'm
> just not convinced it's the secret sauce to mass appeal._

We already have the hard data: almost without exception, people who use phones
want their phones to take nice pictures.

For reasoning about camera software, it is entirely reasonable to assume that
“the segment of society that is enthused about photography” is basically the
exact same segment as “people who have phones”.

If you don’t believe me, show me a popular smartphone that doesn’t have a
camera. :)

~~~
catalogia
I can show you a lot of people who rarely if ever use their smartphones to
take pictures, and more still who primarily take pictures for utilitarian
purposes (such as preserving the information on a whiteboard before erasing
it, remembering where they parked their car, etc.). So I definitely don't
agree that _"owns a phone"_ and _"enthused about photography"_ are basically
the same groups of people. The former is much larger than the latter.

> If you don’t believe me, show me a popular smartphone that doesn’t have a
> camera. :)

I can show you plenty that lack _good_ cameras... I don't think that's a
compelling argument anyway. iPhones have barometers, but that doesn't mean
everybody who owns one cares about their altitude or predicting the weather.

~~~
cameronbrown
I really don't know how you could come to this opinion. Instagram/Snapchat are
literally entire social networks built around photos.

You don't need to be enthused about photography to care about preserving
memories either.

From personal experience, camera quality is far more important than any other
spec (except maybe the display) for the vast majority of people.

------
omtinez
I'm surprised to see such a big emphasis on support for Android. Most camera
modules come with their own drivers that already provide Android support. One
benefit could be the licensing, but after a quick inspection it is unclear to
me what license this library is under: there is a licenses folder with four
different licenses, in addition to a developer agreement.

The design seems to be heavily inspired by the Android camera API: per-frame
configuration, 3A, multiple stream support, device enumeration, etc.

> The HAL will implement internally features required by Android and missing
> from libcamera, such as JPEG encoding support.

That is interesting, since most camera modules will have a hardware-
accelerated path to encode frames directly to JPEG. If the encoding is done
internally in software, it will be much slower than all other implementations
I'm aware of.

------
shmerl
_> the Linux media community has very recently started collaboration with the
industry to develop a camera stack that will be open-source-friendly while
still protecting vendor core IP_

What does that mean? Open source should be open source.

Is it going to work on top of V4L, or is it independent?

[https://en.wikipedia.org/wiki/Video4Linux](https://en.wikipedia.org/wiki/Video4Linux)

------
mschuster91
Is this aimed at smartphones or at real cameras? I mean, the Sony Alpha
series do run Linux, even with an (ancient) Android runtime...

------
me551ah
I don't get who is going to be using this library.

Most Android developers would prefer to use the native Java APIs instead of a
C++ library (with a JNI wrapper around it?) since it's easier. Linux phones
barely have any market share, so this library would be used mainly on
computers, where most of the cameras are USB anyway.

------
vzaliva
I was not able to find a list of supported cameras. Could anybody please
point me to it?

~~~
CharlesW
Found in a recent presentation by Jacopo Mondi of the libcamera team (PDF):
[https://static.sched.com/hosted_files/osseu19/21/libcamera.p...](https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf)

Supported:

      - Intel IPU3
      - Rockchip RK3399

In progress:

      - RaspberryPi 3 and 4
      - UVC cameras
      - VIMC test driver

------
inetknght
Interesting. Does it work with Raspberry Pis and the CSI cameras? What about
my Logitech USB camera? Would it support receiving a picture from e.g. a
flatbed scanner over a parallel port?

~~~
omtinez
> Does it work with Raspberry Pis and the CSI cameras? What about my Logitech
> USB camera?

I don't know the details of the Raspberry Pi cameras, but if it's a UVC
camera (quite likely) then it should be supported by this library according
to the documentation. A USB Logitech camera should also be supported. The
gotcha is that unless those cameras expose advanced functionality (3A,
multiple stream support, etc.), you don't get any benefit over using the
standard V4L drivers.

> Would it support receiving a picture from e.g. a flatbed scanner over a
> parallel port?

I don't think scanners identify themselves as camera devices, but someone can
correct me if I'm wrong.

~~~
mpol
For flatbed scanners you would use something like SANE or VueScan. Most such
devices need a different kind of driver.

------
sandGorgon
This is very interesting. How does this compare with CameraX, for which
Google has built an entire building with hundreds of cameras as a test lab?

[https://www.extremetech.com/computing/301298-camerax-googles-new-weapon-in-the-photography-wars](https://www.extremetech.com/computing/301298-camerax-googles-new-weapon-in-the-photography-wars)

[https://developer.android.com/training/camerax](https://developer.android.com/training/camerax)

~~~
lovelearning
This sits much lower in the stack than CameraX. It's more of a camera HAL,
standardized across multiple OSes.

[http://libcamera.org/docs.html#android-camera-hal-v3-compatibility](http://libcamera.org/docs.html#android-camera-hal-v3-compatibility)

