The growing image-processor unpleasantness (lwn.net)
191 points by pabs3 on Aug 19, 2022 | 96 comments



FYI, these image processors often also do autofocus. Sometimes they don't even expose an API to the operating system for focusing, so you can't focus manually in software even if you want to - the best you can do is specify which area of the image should be in focus, and the algorithm in the image processor will try to achieve that.

That's suboptimal, because there is plenty of research into better ways to do focus. Today's image processors typically just adjust back and forth until they find a frame with more high-frequency components. It kind of works, but it requires them to overshoot a bit and come back, and it doesn't work well for things like mirrors or windows, where they may focus on the window rather than what's behind it, or vice versa. They also have to adjust the focus slowly, because they need to get a new frame and process it to see that the focus needs adjusting further - despite the fact that the coils in the camera can move the lenses in milliseconds if commanded to.
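
For the curious, the contrast-detection loop described above amounts to a hill climb over a sharpness metric. A minimal sketch, assuming hypothetical move_lens() and grab_frame() helpers for the actuator and sensor:

    import numpy as np

    def sharpness(frame):
        # Proxy for "high-frequency components": variance of the image gradient.
        gy, gx = np.gradient(frame.astype(float))
        return np.var(gx) + np.var(gy)

    def contrast_af(grab_frame, move_lens, positions):
        # Step the lens through candidate positions and keep the sharpest.
        # Note the built-in overshoot: we only know we passed the peak
        # after scoring a worse frame.
        best_pos, best_score = None, -1.0
        for pos in positions:
            move_lens(pos)        # the VCM itself can move in milliseconds...
            frame = grab_frame()  # ...but we must wait a full frame to score it
            score = sharpness(frame)
            if score > best_score:
                best_pos, best_score = pos, score
        move_lens(best_pos)
        return best_pos

The per-position frame wait is exactly the latency complained about above.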

Future focus techniques do things like running machine learning over the image and saying "that's a car, and it looks about 25 yards away, so set the focus straight to 25 yards".
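
Setting the focus "straight to 25 yards" is then just the thin-lens equation: for focal length f and subject distance d_o, the required lens-to-sensor distance is d_i = 1/(1/f - 1/d_o), so the lens can be driven to the target in a single move. A toy sketch (the 4.25 mm focal length is an assumed, typical phone-camera value):

    def lens_position_for_distance(f_mm, subject_mm):
        # Thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = 1/(1/f - 1/d_o)
        return 1.0 / (1.0 / f_mm - 1.0 / subject_mm)

    # A 4.25 mm phone lens with the subject at 25 yards (~22860 mm):
    # d_i ~= 4.2508 mm, i.e. under a micron of travel from infinity focus.
    print(lens_position_for_distance(4.25, 22860.0))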


I'm sure these image processor manufacturers wouldn't mind if they could take their old hardware, implement a better autofocus algorithm, and sell you the same hardware but with the new algorithm as a new product which you have to buy.


Who buys a new internal webcam for their laptop, though?


Wrong market. The question is "who buys a new phone because it has a better camera"?


I'm sure there are people who buy a new laptop because it has a better webcam.


So that is why my phone seems incapable of focusing on flowers, insects, and small birds: they take up too little of the frame! It seems like when the camera does its autofocus, it works for a second and then chooses the vegetation in the background.


Trick for cheapo phones (especially Chinese-made MediaTek chipsets):

Point your phone at your hand, the same distance away as the flowers. Focus it on your hand. Lock the focus (and exposure and white balance - it seems you can never lock one without the other). Now point the phone at the flowers and take the photo.

Expensive phones normally support manual camera focus if you install the right app.


Or, heaven forbid, allow the user to control the focus themselves.


Reading between the lines here, the situation seems to be a little like the softmodem situation, which was also a major pain for Linux support. The abstraction previously given in hardware has been split into software/hardware to save on hardware costs, but now they refuse to open source the software.

As for the provided user-space software, it's not clear why this is happening. As long as kernel modules keep to the allowed interfaces (and not the GPL-only internal ones), they can be closed source. Is it a case of too much caution from the vendors (after all, NVidia has closed-source drivers)? Is this supposed to be more forward-compatible? Is it possible to have a higher-performance (zero-copy?) user-space abstraction for this kind of video processing?


It reminded me exactly of softmodems, except that here there's maybe room for innovation. Apparently getting the raw sensor data lets you do clever AI-based enhancements to the image. A bit different from softmodems, because there you were always limited by what the other end of the phone line (i.e. your ISP) would accept, so you couldn't invent a better encoding scheme.


These are the inverse of softmodems, though. Those were just a PHY layer, with the software driver doing all the DSP work - softmodems removed DSP hardware. Modern image sensors have more smarts than just a CMOS sensor.


A quick addition: onboard UAV (drone) cameras and sensors, heading in the other direction. Being without precedent, there's no pattern of evolution in the consumer space as with softmodems, but next-gen UAVs can benefit from onboard filters (and more?). However, the temptation to build proprietary, thoroughly black-box processing chains is huge - precisely for the purpose of manipulating or censoring the output, it turns out. That's hard to deal with in the market, since first-mover and contract-wielding advantages are not fake - unlike the photos and video being generated?


"Just this week I have a friend who bought a brand new Alder Lake laptop who is desperately trying to install the latest available kernel and firmware seeing the HDMI port does not work, nor does the laptop go to sleep properly."

Good to see that the good old "laptop doesn't go to sleep on linux" problem has never truly gone away.


To be fair to Linux, sleep doesn't work reliably on Windows either. The new Intel sleep states just don't seem to be reliable enough with modern software, to the point of at least one laptop manufacturer recommending that customers not carry a laptop in sleep mode in a backpack.

The HDMI issue is ever present. Intel has lately been dropping the ball significantly with their drivers, but at least it's not Nvidia this time, so that's a change!


I mean, yes, of course - I'm just making (a little bit of) fun of the fact that "laptop sleeps reliably" has traditionally been a benchmark for how complete Linux support is for a specific laptop.

But yes, I've had 4 Dell XPSes, all running Windows, and all of them have had at least one episode of cooking themselves inside a bag - I pull one out and it's burning hot to the touch, because some stupid event decided to wake it up despite the lid being shut.


I've had to switch to linux full-time on my latitude for precisely this reason. It sleeps perfectly and the battery life is a lot better on linux than on windows.

The good thing about Dell is that they ship with linux on certain configurations, so linux installs are very robust, with the caveat that you will not get the aggressive fan curves or higher TDP limits that you can get on Windows.


Even on freshly imaged Microsoft Surface hardware, I have problems with Windows going to "sleep" (since it's connected standby, not real sleep). About 25% of the time the IR webcam gets stuck always-on, preventing Windows Hello face unlock from working and causing the IR blaster to heat up significantly. Reproducible even on an image with all the latest MS updates.


> at least one laptop manufacturer recommending customers to not carry a laptop in sleep mode in a backpack.

And it wasn’t even just a recommendation, they declared it would void your warranty to do so! (In at least Australia, such an attempt to disclaim liability is illegal.)


Yeah my 6 year old i7 laptop just shuts down completely when told to sleep, both in Windows and Ubuntu (always has since I bought it). It's definitely some kind of firmware thing that Intel is royally messing up on a grand scale.


> Good to see that the good old "laptop doesn't go to sleep on linux" problem has never truly gone away.

FWIW, I have a spare Windows laptop, just over a year old, whose WiFi does not recover upon waking from sleep about eleven times out of a dozen.

OTOH, I have had both Windows and Linux laptops which have gotten uncomfortably warm when ‘asleep.’ And right now I have a Linux laptop which very rarely hangs on sleep and turns into a space heater, not even turning off the monitor but becoming completely unresponsive.

Sleep is apparently a lot trickier a feature than it looks.


Blame Intel; their power management code is an ever-changing mess.


I have 2 computers with AMD, sleep glitches on both.

On desktop, sometimes there's no video signal after wake up.

The laptop doesn't support sleep at all; it's either connected standby (= not sleeping) or hibernate. Windows says none of the proper S1-S3 states are supported by the computer.


So it only works reliably on Macs?


No, it works reliably on many individual PCs as well. Comparing the entire PC ecosystem to a couple models made by a single company is a category error.


There are problems with Macs too: as recently as the last release, a laptop could have a bit set preventing it from sleeping that you could only see from the terminal.


My desktop computer doesn't sleep properly thanks to the Nvidia driver. All my computers worked fine for years and I definitely should have done more research before "upgrading".


> Senozhatsky posted a pointer to a repository containing a CAM implementation, but that code is only enlightening in the vaguest terms. There is no documentation and no drivers actually implementing this API. It is highly abstracted; as Sakari Ailus put it: "I wouldn't have guessed this is an API for cameras if I hadn't been told so".

He's not kidding. It's hilarious. It could be a driver for anything.

https://chromium-review.googlesource.com/c/chromiumos/third_...


I thought it was a bit hyperbolic, but yeah, looking at the headers (both userspace and kernel), I can't see anything remotely related to cameras. I would at least have expected a colorspace somewhere (a colorspace which could have included jpeg/heif/... as formats).


Owning an XPS 13 2in1 from 2021, I get the impression that this issue started way before IPU6. At least my webcam never showed any signs of life, and the only thing I found about this is an ancient git repo [1] where other frustrated device owners shout their anger into the wind.

I might be misinterpreting this; I'm not at all a hardware/driver person, if that's even a thing. Can anyone confirm?

[1]: https://github.com/intel/intel-camera-drivers/tree/master/dr...


Yes, the webcam situation in Linux has been degenerating for years now, as many laptop models - especially anything that is an ultraportable or convertible - have switched to IPU-based webcams. I have multiple laptops from 5 years ago where the webcams don't work and will likely never work under Linux.

Basically, the main difference is that with IPU-based stuff the kernel must support not just a "webcam"-like device, but rather must contain support for all the individual components making up the webcam. Think not only the actual raw sensor hardware, but even things such as controlling the shutter and the voice-coil focus motors directly.
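
To make that concrete, the focus motor typically appears as its own V4L2 sub-device, and the OS has to drive it explicitly. A minimal sketch, assuming a hypothetical /dev/v4l-subdev1 node that exposes V4L2_CID_FOCUS_ABSOLUTE:

    import fcntl, os, struct

    VIDIOC_S_CTRL = 0xC008561C           # _IOWR('V', 28, struct v4l2_control)
    V4L2_CID_FOCUS_ABSOLUTE = 0x009A090A

    def set_focus(position):
        # struct v4l2_control { __u32 id; __s32 value; }
        ctrl = struct.pack("Ii", V4L2_CID_FOCUS_ABSOLUTE, position)
        fd = os.open("/dev/v4l-subdev1", os.O_RDWR)
        try:
            fcntl.ioctl(fd, VIDIOC_S_CTRL, ctrl)
        finally:
            os.close(fd)

    set_focus(512)   # arbitrary lens position; the valid range is driver-specific

With a UVC webcam all of this lives in the camera's own firmware; here it becomes the kernel's problem.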

The situation is similar if not identical to a smartphone, where the OS interfaces directly with the raw camera hardware, instead of traditional computers where it was all abstracted as a USB UVC webcam. In fact, some of the newer sensor drivers are adapted/stolen from Android kernels.

Many, many laptop models come with IPU3 hardware, which is around 7-year-old hardware at this point, and it is still mostly unsupported in Linux. In fact, practically _all_ the support in current Linux comes from the efforts of one guy who has been trying to make the Surface Pro line of devices work under Linux ( https://github.com/linux-surface/linux-surface/wiki/Camera-S... ). I have had some success with these patches (which are now mostly upstream), by manually writing/converting the required sensor drivers for my laptop from Android kernels/HALs, but the image quality was just garbage. I don't even know how much image processing is done software-side on Windows.

Intel has published code, but not bothered to upstream it; Google, which uses this hardware in some ChromeOS devices, only cares about ChromeOS support, which does things just too differently from Windows devices (e.g. basic things like enumerating which camera hardware is connected). None of the big OEMs is behaving nicely here.

IPU4 appeared more recently and is basically entirely unsupported. There is zero upstream support, and don't bother with the out-of-tree driver since it will be a waste of time (you also need the drivers for the specific sensors used in your laptop, plus the "pipeline configuration").

IPU6 is probably just a newer iteration of this. I don't know if it will have a bit more developer/manufacturer oomph and thus more Linux support than previous iterations, but even if it does, it's highly likely that all existing laptops with IPU hardware will never have a working webcam in Linux.


Thanks for the explanation/opinions! Sadly this confirms my very "grasping in the dark" explorations of this.

It reminds me of a comment [1] I left a while back, and that I randomly found again today:

> Even though significant work is required to keep [old machines like 2012 thinkpads] working, some people prefer to do so. Crucially, they prefer it not because they love tinkering, but simply because the value propositions of 2021/2022 hardware doesn't look better.

I guess the value proposition of modern notebooks is actually degrading, at least in this respect...

[1]: https://news.ycombinator.com/item?id=29872420


> no SoC vendor is willing today to open their imaging algorithms.

I know that, at least in Raspberry Pi land, this is not a matter of unwillingness but of inability: the GPU boot blobs are protected under copyright law by upstream licenses that do not permit distribution of the source (which is not F/OSS anyway).

It’s like three vendors removed (GPU firmware author, SoC maker, board maker). I imagine the ISP situation is somewhat similar; Intel has probably licensed some of this code rather than writing all of it from scratch.

The firmware industry is like an alternate reality where f/oss never happened, it seems. People guard their codez as jealously as the underground blackhat scene.

Code libraries for implementing stuff in hardware (on FPGAs) are literally referred to as “IP”. The mind reels. Totally different culture over there.


In the case of image processing it was "we paid a lot of money for it, so you can't have it without DRM":

https://forums.raspberrypi.com/viewtopic.php?p=16989#p16951

> The GPU contains a very sophisticated ISP - the pipeline of image processing to get some raw camera data to pretty JPGs. This pipeline also needs to be 'tuned' to the particular camera and this can take some time (months for a specialised team to do a really good job).

> All the work above needs to be done for each new camera module - and will be done for any camera sold for the Raspi.

Raspi cameras ship with DRM: https://gist.github.com/marcan/6dde73a9a0c917cd4fc9784a0a73e...


libcamera also supports the RPi and moves all of that control-loop stuff into Linux userland, so you would think there are no secrets left

but the actual register writes that drive the ISP are still in the blob, and libcamera is just telling the blob how to tweak every knob in the hardware

it makes no sense that they have opened it up so much, yet still insist on keeping that last bit secret
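
To see how much now runs in userland, you can pull the raw Bayer frames yourself on an RPi. A rough sketch using the picamera2 library (exact configuration keys may vary between versions):

    from picamera2 import Picamera2

    picam2 = Picamera2()
    # Request the unprocessed Bayer stream alongside the normal output.
    config = picam2.create_still_configuration(raw={"size": picam2.sensor_resolution})
    picam2.configure(config)
    picam2.start()
    raw = picam2.capture_array("raw")   # the mosaic as delivered by the sensor
    print(raw.shape, raw.dtype)

The 3A control loops that consume frames like this now run in open userland code - yet the register pokes they result in still disappear into the blob.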


Don’t discount the nature of the business: a lot of the low-level code may never get updated, or even be capable of updates - it’s burned into ROMs - and the only thing that might keep such a business afloat is the fact that they squeezed a bit better performance out of otherwise common parts that one big factory uses to make webcams or other widgets for dozens of individual companies selling their own products. It’s unfortunate that they fear competition so greatly that they refuse to even document their external interfaces, because that might make it slightly easier for someone to work out how their system works, but I don’t fault their decision-making.


I see image processors as a stopgap measure that won't exist for long.

Just like GPUs were first dedicated hardware that could only render triangles, but are now fully fledged parallel compute machines which, if asked to, can also render triangles.

Image processors will go the same way - eventually their parallel compute abilities will be merged into the GPU.

They'll be replaced by GPUs that can take the gigabits of data coming off a CCD sensor straight into GPU RAM, ready to have noise-reduction, demosaicing, de-shake, stacking, de-warping, focus, white-balance, etc. algorithms applied.
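
Demosaicing is a good example of why that mapping is natural: it is pure per-pixel interpolation over the Bayer mosaic. A naive bilinear sketch in numpy, assuming an RGGB layout (real ISPs use much fancier edge-aware variants):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        # Split the RGGB mosaic into sparse per-channel planes.
        h, w = raw.shape
        r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
        r[0::2, 0::2] = raw[0::2, 0::2]
        g[0::2, 1::2] = raw[0::2, 1::2]
        g[1::2, 0::2] = raw[1::2, 0::2]
        b[1::2, 1::2] = raw[1::2, 1::2]
        # Fill the missing samples by averaging their neighbours.
        k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
        k_g  = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

Every output pixel depends only on a small neighbourhood, which is exactly the shape of work GPUs are built for.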


Unfortunately, I think we are heading in the opposite direction.

The Moore’s Law/Denard Scaling disconnect has created a growing need for dark silicon [1] on chips. Thus we are likely to see even more specialized components rather than fewer. The Apple M1/M2 is a perfect example.

[1] https://en.wikipedia.org/wiki/Dark_silicon


Yes, but it makes much more sense to specialize the components by form than by function.

And GPUs and image processors look a lot alike. The largest difference today is on the IO links, but it makes too much sense to link the camera directly to the DRM.


The theory is that for every node shrink, a greater percentage of your transistors have to be dark.

If you use a component too frequently, it isn't dark enough.

I'll trust your statement that GPUs and image processors look a lot alike (I am no expert by any means), but as long as there is a difference an image processor is going to be a candidate for inclusion on the chip. The designers have a transistor budget, thermal constraints, and a prioritized list of components they can put on the chip given available transistors. I'm sure that if they had better alternatives, they'd skip the image processor. But they didn't, which gives you some information on how Intel thinks about image processors.


It seems to me that what the FOSS world needs in order to succeed is an extreme push towards forming a class of programmers specialized in reverse engineering. We already have plenty of capable coders who have written so much useful software, but does it matter if we can't use all the hardware at our disposal? Our devices are filled to the brim with accelerators, sensors and assorted specialized hardware that we could be taking advantage of, but they are all buried behind proprietary drivers or just plain inaccessible firmware.


What a waste of human potential that would be.

Don't get me wrong: those people are heroes. They're literally sacrificing their time for a good cause. But as with many such hero stories, some bureaucrat could have resolved the issue with the stroke of a pen.

What we need is an extreme push against the culture of secrecy and disempowerment of device owners.


You’re wrong about it being a waste of potential, imo.

It’s real world training for reverse engineers. Having worked in that field myself I can tell you I was never surrounded by so many smart, talented and exceptional people as when I worked with other reverse engineers (especially in the video game world).

RE is a necessity, and it paves the way for some incredible talent to shine and practice.


I think the argument is that in a better world where everything came with open specs we wouldn't need reverse engineers and those smart people could spend their time on something else. It's an argument pointing out where we live away from the global optimum.


Yes and my argument is that we might lose out on the skills being developed by fantastic reverse engineers :)

A global optimum would include some other way for these legends in the making to train.


They'd be solving other low level problems instead I assume. Systems that are unintentional blackboxes instead of intentional ones.


That is like saying it would be a waste to give aid to refugees in war zones because we shouldn't have wars in the first place. There will always be pushback against open hardware as long as closed hardware is profitable.


Not having a war is much harder than approving the release of a bunch of documentation PDFs that you have already.


What is done is done. Even if some piece of legislation were to come out tomorrow all the older devices would still be completely closed off. Would we have to collectively just push to move to the new open-by-default devices or should we persist in reverse engineering the older stuff?


That's not how laws work.


Are all laws retroactive? The EU has now mandated the use of USB-C, does that mean I can bring my iPhone 4 to Apple and have them solder the new port?


Laws can say you cannot sell any more X unless it complies with Y. That's not a retroactive law.


I don't think you've read my reply correctly. If a law were to come out that said "you can't sell any more X unless it complies with Y", we would still have billions of devices that don't comply with Y. So even if a law about open hardware/firmware were to pass in the next few years, we would still need a huge reverse-engineering effort to open up all the older devices. Unless you want to argue that, yes, we should forget about those older devices and all jump onto the new compliant platform, I think pushing hard for good reverse engineers is a very important priority.


> So even if a law about open hardware/firmware were to pass in the next few years, we would still need a huge reverse engineering effort to open up all the older devices.

Not necessarily. Impose a tax on all products whose manufacturers do not release full documentation, even for past devices; that should serve as a pretty decent incentive.

Yes, legally it's a questionable grey area, but to be honest it's time to actually use our market power.


Some laws are retroactive, some aren't. And, anyway, "you must publish the specs of any hardware that you keep selling" isn't retroactive in any way, but would solve most of the historic problem.


Not all are, but they can be. There's typically a bias (I can't remember the name, but it's a known concept) against changing the legality of past actions. But there have been, and could be, situations where previously-legal actions were retroactively outlawed and punished. It comes down to a per-situation decision about which is the higher good: the individual's ability to rely on an action that is legal now remaining legal in the future, or society's ability to deter things even though the laws haven't been updated yet. Prominent examples of this (sorry for the Godwin spin here) are the punishments of Nazis post-war, for actions that were technically legal during the Nazi years. The argument being: "being able to rely on the law not being applied retroactively is not a higher good than stopping people from doing clearly immoral things."


Nazi war crimes were punished retroactively for a number of reasons, political and otherwise. One of the most important was that what the Nazis did was blatantly wrong. You can't seriously claim that killing millions of people can be justified simply because there isn't a law against it; it's something so intrinsically evil that you can't hide behind pure formality.

Arguably, refusing to publish every single document pertaining to proprietary hardware is not on the same level of obvious malpractice as genocide, so I think you could have chosen a milder example to argue your point.


Fair point, not sure why you're being downvoted...

I would never dream of equating proprietary documentation with Nazi war crimes; that's without question. I just used this example because, as so often in questions about these war crimes, there is no real argument to be made for "the other side", so it clearly shows the line of reasoning for breaking with a legal convention in specific cases. You said it yourself: "it's something so intrinsically evil that you can't rely on pure formality." Which is exactly the point. Protecting the perpetrator just because there's also value in being able to rely on one's actions being legal at the time doesn't outweigh the cost of letting them get away with such monstrosities.

A more recent and less evil example would be the Cum-Ex scandal. One question was whether the money would have to be given back, given that what happened might have been technically legal but was blatantly and expressly against the spirit/intention of the law. But that whole thing is still being fought out, so the conclusion is less clear-cut.


Completely agree. I took matters into my own hands with my laptop. Wasn't able to reverse engineer everything but I got many features working on Linux and the result was better than the proprietary Windows app.

I wish I was knowledgeable enough to tackle more complex hardware like webcams. Perhaps one day I will be.


> I wish I was knowledgeable enough to tackle more complex hardware like webcams. Perhaps one day I will be.

https://lkml.org/lkml/2020/9/17/363

People who started the job on Surface webcam hardware didn't even know C to begin with ;)


Yeah that's impressive... I have to step up my game then. :)

I was able to intercept and reverse engineer much of my laptop's USB features. I hit a wall when I tried to figure out the ACPI stuff. There's some fan control functionality in the Windows app that's missing from my software but the fans are terrible and always at 100% anyway so at some point I decided to just let it be.


What about if we just form a class of programmers that exclusively work with Linux and buy Linux compatible laptops? That would be my preference.


What's the point of that? One of the benefits of having a FOSS platform is that you can keep supporting and updating hardware and software that in the normal proprietary economy would have been abandoned a mere few years after entering the market. If instead of pushing for reverse engineering we simply made more and more people buy Linux-compatible devices (which still doesn't mean they'd run FOSS), we would have a humongous amount of perfectly fine hardware locked down and unable to be properly used.


That's martyrdom and it isn't even for a good cause IMO. If someone buys Windows-only hardware that loses support after a few years, that's on them.


> If someone buys Windows-only hardware that loses support after a few years, that's on them.

Unfortunately, the FOSS world isn't always able to cater to professional and prosumer crowds.

If it is absolutely critical to my work that a piece of hardware has feature X, then I'm going to buy it regardless.

I'd choose a FOSS-compatible solution if it were available, but this hasn't been the case many times in my career. (Yes, I can provide concrete examples if necessary.)


I think if say 15% of buyers demand Linux support, then it can become a standard feature of most sold equipment. Just like Windows support.


> Just like Windows support.

Completely non-existent? Either a laptop comes with windows or you get zero support. See every chromebook ever.


If you're reverse engineering you've already lost. Long-term it's much cheaper to buy FOSS-friendly hardware (AMD+AMD).


You are absolutely correct. We need an organisation that channels lots of money into hardware and software reverse engineering. This is of course how Linux started (without the funding): developers doing reverse engineering in their spare time on the hardware that they had and wanted to use.


Judging by Asahi, FOSS will be fine.

The real question is where the hell is the open source hardware?

Engineers working for NVIDIA, or camera IP core specialists are not gods. Let's just make our own god damn camera and GPU.


On that note, I recently read that Asahi now has a pretty much finished driver for the M1 GPU. How did they pull that off so quickly? Was there extensive documentation available?

Thanks to nouveau being mediocre at best and lagging 2 or 3 generations behind until it has usable support for a GPU, I always assumed writing a decent GPU driver through reverse engineering must be next to impossible.

So what's up with Asahi? Were there just some gifted Wunderkinder working on it? Is the architecture so much simpler? Is there more information available? IIRC it's based on PowerVR, so it's not completely unknown - but then again, the nouveau team got some hints from Nvidia in the past, and they have also been working on it for a really long time in comparison. Just really curious.


> I recently read that Asahi now has pretty much a finished driver for the M1 GPU

Where have you read that? According to Asahi Lina's tweet from a few days ago, they are just starting with it:

https://nitter.net/LinaAsahi/status/1559408257965309952


Hmm, I can't seem to find it, but it was some article from maybe half a year ago, where in the end they managed to render a model of a rabbit, with shaders and all. Am I remembering something else here?


> Am I remembering something else here?

No. That has indeed happened. It was work by Alyssa Rosenzweig [0].

If I understand correctly, she implemented the userspace Mesa driver. The kernel driver is being developed by Asahi Lina. Together, they are meant to provide "full open source graphics stack for 3D acceleration on Asahi Linux" [1].

[0] https://rosenzweig.io/blog/asahi-gpu-part-5.html

[1] https://rosenzweig.io/blog/asahi-gpu-part-6.html

Edit: make the quote more accurate, adjust formatting


https://news.ycombinator.com/item?id=25873887

(“Dissecting the Apple M1 GPU, Part II”)


nouveau has the problem that it relies on nvidia-signed firmware blobs for some functionality (like reclocking), and if nvidia doesn't want to give them those blobs, then they can't use the features the blobs enable. With nvidia's Linux kernel driver going open (but the firmware and userspace staying proprietary), the firmware will be released separately and be redistributable, so nouveau will be able to use it.


> if nvidia don't want to give them those blobs, then they can't use the features the blobs enable

Nvidia gives these blobs to Windows users who download their drivers, so why can't they write a script that extracts the blobs from a Windows driver binary, and let the user download that binary manually?


They give the blobs to users of the proprietary Linux driver too. I'm not sure why that isn't viable, but they definitely did it for very old unsigned versions of the firmware on old GPUs. I guess they got tired of the upgrade treadmill if they aren't doing it any more. They probably have such a script for their own reverse-engineering efforts, but don't want to deal with the support issues when users try the script against newer driver downloads, etc.


> Judging by Asahi, FOSS will be fine.

Or rather: based on the results of other contemporary complex-hardware RE efforts, Asahi will never be relevant. Think of how much GPU hardware there have been efforts to RE, and how few of those efforts have produced drivers that people would be happy to use on a daily basis. How many people use Nouveau these days? PowerVR SGX (a "GNU high priority project" from a decade ago!)? Lima? Even Panfrost?

In fact, this entire IPU webcam thing is not new. We've had hardware with IPU for at least half a decade and literally _NO_ such device works with Linux, only a couple devices half-work, and very poorly at that.

There are far fewer people working on hardware support in Linux than you may think.


> The real question is where the hell is the open source hardware?

Agreed. I get so angry every time I think about the amount of Verilog and VHDL that will never, ever get released even after the companies responsible for it die and even after many years have elapsed.

Where is the 3Dfx VSA-100 source, for instance? What technical advantage could it give NVIDIA now? They purchased 3Dfx twenty years ago!


They don't necessarily own the rights to all the source. You can't release it if you licensed part of it from some other company and that company is now dead.


The problem is that fabricating a chip has upfront costs measured in tens of millions for the masks, and the chips that come out aren't all uniform - there is more work required to characterize variations in power consumption, reject bad parts, etc., in order to bin them for different price points. It's hard to do at a reasonable cost per chip without huge volume.


Is there public documentation on these "image processors" that is sufficiently detailed to implement drivers/firmware for them?

It seems to me that making such documentation available should be the core demand on hardware manufacturers - not that they necessarily open-source their own software/firmware.


> ...making such documentation available should be the core demand... not that they necessarily open source their own software...

That would work only if companies could be compelled to release complete documentation. E.g., Microsoft even corrupted[1][2] the ISO standardization process to force through standardization of their OOXML (to take the wind out of worldwide efforts around standardization on the existing ISO standard, ODF) even though the OOXML Microsoft shipped _after_ standardization could not be implemented from the specification in the standard [2].

Forcing companies to provide source for a fully functional driver under a suitable license would prevent this sort of abuse from bad actors.

[1] http://www.groklaw.net/article.php?story=20070312083134403

[2] https://www.garshol.priv.no/blog/154.html


I think it's fair to expect hardware companies to publish sufficient documentation that independent software developers can write code to make use of their HW. Preferably that would come about through market pressure, but it may at times need government action along antitrust lines.

However if the nature of a device is such that a driver for it requires the implementation of non-trivial algorithms then I wouldn't support government action to force companies to open source such code.

I think standardization processes are a rather different topic from hardware, but I'm wondering: did the Microsoft shenanigans you mention actually prevent people from developing compatible software? My impression is that the compatibility issues people had with MS file formats in the '90s have largely been resolved in practice. Is that not the case?


Yes, standardization processes are different than hardware, but I expected that the requirements for a full and accurate specification would be stronger for an international standard than random hardware documentation-- and, the MS example showed that even a standards document could be gamed to such an extent as to be unusable to create independent implementations.

I don't follow the MS world, but I do recall complaints that OOXML as built by MS included binary blobs (straight out of their old, undocumented .doc and .xls formats) wrapped in XML. And folks using MS software at an old job sometimes complained that when I edited an MS document in LibreOffice, "I broke it." Thankfully my group standardized on plain text to make version control more useful, so it only came up when dealing with folks outside the group.


Does anyone know why the image processing code needs to remain proprietary? Is it violating patents or something?


Patents are often a concern. Maybe you think you're not violating any patents, but somebody with access to your code could find something that looks like a violation. Real or not, it can spell trouble for you.

Your code delivery could also contain parts that are licensed from other companies, limiting your options with regards to licensing.

One way to limit the problem is to put most of your "secret sauce" code onto one or more sub-processors embedded in the imaging hardware. The register bank of your hardware would only be accessible to these sub-processors, meaning that you don't have to publish your register definitions. Then you can deliver the sub-processor microcode as a binary blob (independent of the host OS) and let the kernel module itself be open-source. This approach is taken by at least some vendors.


I worked on an embedded hardware application using Linux and "cameras".

We (like many people) were using sensors from OmniVision[0], and there was a dedicated contractor whose entire area of focus (and life's work) was getting photons from the camera sensor into something usable by software. In our case that was an unholy combination of vendor software, custom code derived from very obscure NDA-only datasheets, and a bunch of custom gstreamer plugins that could actually make use of the sensor, interfaces, and hardware encoding silicon.

It was one of the more eye-opening "WOW, so that's how the sausage is made" experiences of my life.

[0] - https://www.ovt.com/


The "3A" algorithms - Auto-focus, auto-exposure and auto-white balance are generally considered "secret sauce" by the various camera companies and thus closely guarded. The calibration process which is required to tune an ISP's pipeline is also very subjective and tied to the end use case. Having been involved with this area for the past couple of years I do believe that the openness of the raspberry pi platform (specifically with regards to the camera) is nudging the industry in the right direction.


It probably doesn't need to be in all cases, and for simple stuff like generic image processing (resizing/rotation), there's probably no "secret sauce" to worry about.

You could maybe argue that things like denoising algorithms or foreground/background detection are more secret, as there might be competitive advantages there somewhere...


There has been a lot of progress in cameras for smartphones, for very good reasons. And I mean a lot, see the physical design of a modern smartphone lens for example. I would guess that laptop cameras got hit with extra protection because they're "too close" to what's considered core intellectual property for a camera-maker. But it's just a guess really.


This url is a "free link" to subscriber-only LWN content, and shouldn't have been published on HN. Quoting from the MakeLink page:

The "subscriber link" mechanism allows an LWN.net subscriber to generate a special URL for a subscription-only article. That URL can then be given to others, who will be able to access the article regardless of whether they are subscribed. This feature is made available as a service to LWN subscribers, and in the hope that they will use it to spread the word about their favorite LWN articles.

If this feature is abused, it will hurt LWN's subscription revenues and defeat the whole point. Subscriber links may go away if that comes about.



HN is definitely on the edge with that occasionally, though! Some weeks there have been 4+ subscriber links here.


Ah well...


Honestly in this case, it makes it more likely for me to subscribe. I wasn't aware of the quality of lwn articles before I started seeing them regularly on the HN frontpage, and I wouldn't have even thought about subscribing to a news source centered mostly around Linux before.



