
Linux kernel maintainer says no to AMDGPU patch - pero
https://lists.freedesktop.org/archives/dri-devel/2016-December/126516.html
======
indolering
Dave Airlie's followup is pretty great,

> Here's the thing, we want AMD to join the graphics community not hang out
> inside the company in silos. We need to enable FreeSync on Linux, go ask the
> community how would be best to do it, don't shove it inside the driver
> hidden in a special ioctl. Got some new HDMI features that are secret, talk
> to other ppl in the same position and work out a plan for moving forward. At
> the moment there is no engaging with the Linux stack because you aren't
> really using it, as long as you hide behind the abstraction there won't be
> much engagement, and neither side benefits, so why should we merge the code
> if nobody benefits?

> The platform problem/Windows mindset is scary and makes a lot of decisions
> for you, open source doesn't have those restrictions, and I don't accept
> drivers that try and push those development model problems into our
> codebase.

~~~
X86BSD
That's an interesting take from someone who develops a kernel whose
non-portable "linux'isms" the rest of the world has to rip out of code daily,
dealing with the pain of porting that code to their own platforms. Jesus, the
hypocrisy is deep.

~~~
AsyncAwait
So Linux is not allowed to demand quality code that is following their
conventions and has to be satisfied with the minimum amount of effort for
AMD's Windows code to run? Why should the Linux kernel include an abstraction
layer for AMD's Windows code? Why would any sane person agree to that?

> all sorts of "linux'isms" from code daily and deal with the pain of porting
> non portable Linux code to their platform.

If you're developing for Linux, using Linux specific technology, then of
course there would be porting effort required.

Same as if you want to make your Windows stuff work on Linux, there should be
porting required - after all, it's a different platform.

What AMD wants to do is to _sidestep_ as much of the porting as possible, by
effectively shipping their Windows code inside the Linux kernel.

~~~
wolfgke
> So Linux is not allowed to demand quality code that is following their
> conventions and has to be satisfied with the minimum amount of effort for
> AMD's Windows code to run? Why should the Linux kernel include an
> abstraction layer for AMD's Windows code? Why would any sane person agree to
> that?

Linux is open source, so if the kernel developers desire better-designed code,
they are free to change the code up to their quality standards. If the kernel
development team does not have the manpower for this, they should think about
a way to maintain the kernel that involves less work. One example (among many)
would be to keep the internal kernel interfaces stable over many years, so
that updating all the drivers to new internal kernel interfaces is only rarely
a lot of work.

> If you're developing for Linux, using Linux specific technology, then of
> course there would be porting effort required.

The released open source drivers seem to work quite well (as they do on
Windows). The problem is that they don't fit the taste of the kernel
developers.

~~~
quickben
Anything that breaks transactional states and atomicity isn't a matter of
'taste'; it's grounds for getting fired so we can get somebody who can do
those properly.

If anything, that post highlights a lack of quality in the AMD driver team,
and doesn't have much to do with 'taste'.

~~~
makomk
If I'm following the conversation correctly, it doesn't break anything; the
kernel developers just don't understand it. Also, the followup from Alex Deucher of
the AMD team is interesting: [https://lists.freedesktop.org/archives/dri-
devel/2016-Decemb...](https://lists.freedesktop.org/archives/dri-
devel/2016-December/126684.html) Basically, he reckons that the atomic
modesetting code is a poorly-thought out and maintained disaster that
regularly breaks multiple drivers - and having somewhat followed the
changelogs, I can entirely believe this. (Dave Airlie responds by blaming AMD
for not sufficiently testing the upstream kernel developers' buggy changes to
their drivers.)

~~~
quickben
You are probably right. But so are the kernel devs.

It seems to be an argument of "we do this for all drivers" vs. AMD pushing "we
want to be the exception for this driver only."

------
laurentoget
I suspect some bearded guy at AMD is having his 'i told you so' hour of glory
sometime this week.

~~~
frozenport
Quite the opposite: somebody at AMD wanted to play along with the openness of
the Linux kernel. Then, to help unify their code bases, they pushed this - and
got rejected. Somebody at NVIDIA is laughing their ass off, saying it would
have been much easier if AMD just did what NVIDIA does!

~~~
joecool1029
Don't see why you're getting downvoted. This is likely the closest to the
truth.

~~~
sangnoir
> This is likely the closest to the truth.

This depends on whether there is at least one person at AMD who has experience
with kernel development. If _all_ of the team members are kernel outsiders,
then this might confuse/surprise them. Otherwise someone likely raised this as
a potential issue.

~~~
chei0iaV
AMD had hired kernel developers to work on graphics drivers soon after taking
over ATI.

~~~
dvfjsdhgfv
well, the conclusion is obvious then

------
fowl2
Quite professional, well written and clearly the words of someone who cares
about the product.

The lack of profanity already exceeds LKML's reputation.

I think the point about rules being applied consistently is very true. If
Alice does the work to comply then Bob shouldn't be able to get away without
doing it just because he's bigger.

~~~
bonzini
Reputation or perception are sometimes very different from reality.

------
joeguilmette
Anyone want to explain this in simple terms for us folk not knee-deep in
kernel graphics driver politics?

~~~
theparanoid
Nvidia uses largely the same driver code for both Linux and Windows in their
proprietary driver (I believe they call it a unified driver).

AMD tried the same in their open source driver and were rejected by the kernel
maintainer. Unified drivers have code-sharing advantages but don't follow the
practices of the Linux kernel.

~~~
vvanders
My naive assumption is that code reuse across platforms is a good thing, I'd
love to understand why this isn't the case here or what the concrete arguments
are against it.

~~~
AnthonyMouse
> My naive assumption is that code reuse across platforms is a good thing, I'd
> love to understand why this isn't the case here or what the concrete
> arguments are against it.

A driver is inherently platform-specific. It's glue that ties the hardware to
the operating system. The only "correct" way to have one driver work on
multiple operating systems is for the operating systems to all use the same
driver model.

The ugly way is to create your own hardware abstraction layer and then write a
translation layer between that and each operating system, because that's
complicated and hideous.

But it's especially silly because Linux accepts suitable contributed code, so
you could instead use the native Linux model _as_ your "intermediary layer"
and fix Linux if it isn't suitable in some way. And then translate _that_ to
what the closed operating system you can't modify uses.

The result is that the Linux people are happier _and_ you have one less
translation layer to maintain.

~~~
EpicEng
>But it's especially silly because Linux accepts suitable contributed code, so
you could instead use the native Linux model as your "intermediary layer" and
fix Linux if it isn't suitable in some way. And then translate that to what
the closed operating system you can't modify uses.

But Linux represents a _tiny_ portion of the gaming community, so that approach
would make no sense at all for a GPU vendor. C'mon.

~~~
AnthonyMouse
> But Linux represents a _tiny_ portion of the gaming community, so that
> approach would make no sense at all for a GPU vendor. C'mon.

The growth market for GPUs is GPGPU and servers. And Linux represents a
_large_ portion of the programming and server communities.

More to the point, as soon as you support Linux at all then it doesn't matter
who has more share, it's still less work to do the above than have to maintain
another translation layer.

~~~
freeone3000
But AMD doesn't. GPGPU is already supported on nvidia drivers with their
opaque blob. AMD has a more-transparent blob. People who want this to work
already have a solution. This kernel change is probably important to some
people, but those who simply want to run a GPGPU cluster on linux already have
workable solutions.

~~~
AnthonyMouse
The GPGPU market is the polar opposite of the gaming market.

Game developers might like to see clean driver source but they don't get to
choose what kind of GPU their customers have already bought. And 99% of gamers
are not going to choose their GPU based on Linux drivers. So nobody has any
leverage and vendors have no incentive to change.

Meanwhile thousands of universities and institutions are each going to be
looking for 25,000 GPUs, and they _can_ choose what brand they buy based on
what makes their internal developers happy. Hosts like Amazon and Google are
each going to be buying millions of GPUs, and having better and more
transparent drivers, so that they can more easily improve power consumption by
even a small percentage, can save them a million dollars a year in electricity.

Someone like Google could come to each vendor and say "first to have mainline
kernel drivers gets all our business" at any point. Or the same result in the
other order; once there are clean drivers third parties are more likely to
make power consumption and performance improvements that give AMD the edge
when the major customers crunch the numbers.

There is a significant competitive advantage in it for AMD to get this right.

------
bentona
What's the practical effect of this? We'll have to wait until someone writes a
reasonable alternative for these GPUs to be supported in Linux?

Edit: Here's the start of the thread
[https://lists.freedesktop.org/archives/dri-
devel/2016-Decemb...](https://lists.freedesktop.org/archives/dri-
devel/2016-December/126422.html)

~~~
zanny
This whole project from AMD was to try to unify their driver infrastructure
between Linux and Windows, because their old FGLRX driver was trash and was a
hacked together mess of Windows bits tied to the Linux system with its own
binary kernel driver.

So they wanted to get rid of the ugly kernel part that was a PITA to install
and update and are now pushing a lot of their hardware abstraction code from
Windows into kernel patches so the AMDGPU driver can just talk to the Windows
blob pretty much verbatim (which is now AMDGPU Pro).

The practical effect is a continuation of the status quo. AMDGPU Pro, without
a lot of this functionality, is either broken or underperforming across all
distros. It is _still_ better than what the last FGLRX was, but nowadays the
Gallium free driver they _also_ develop is beating the blob in almost
everything except the latest driver-level optimized games.

Most distros have completely dropped all proprietary AMD support. Going
forward, it will be up to AMD to ship a proprietary driver and maintain
installation facilities pretty much everywhere. AMDGPU with Mesa is going to
continue working fine, new GPUs are getting supported still, and a lot of what
this HAL does (display / window management) has had usable support that has
worked for years in various parts of Gallium / Mesa / DRM.

The optimistic future is that AMD drops AMDGPU Pro, refocuses developer effort
on AMDGPU / Gallium, and works with the rest of the Linux graphics community
to implement Freesync / Trueaudio / whatever other tech AMD has buzzwords on
into shared kernel code rather than trying to stick it in a HAL from their
Windows driver.

The pessimistic view has AMD firing or reassigning a lot of its Linux staff,
leaving its hardware on the platform to wilt. It would never stop entirely:
AMD provides programming manuals for their hardware and _most_ of their ASM on
new platforms, enabling almost anyone to program their GPUs (unlike Nvidia,
who publishes nothing, requiring devs to reverse engineer their hardware and
ASM), so the support would still be better than Nouveau.

~~~
chao-
I wanted to comment on/respond to both sibling posts to mine (by pjmlp and
witty_username), but you seem quite knowledgeable so I'll ask you in reference
to them:

I have run desktop Linux across a dozen (maybe slightly more) machines over a
decade, and friends will ask me for advice stepping into that world. On
graphics drivers, my safest recommendation has always been:

\- If AMD, use the open source version.

\- If Nvidia, use proprietary.

\- If Intel integrated, thank whatever god you believe in for Mesa.

What about Nvidia's GPUs, community relations, or [insert other topic] hobbles
their open source driver so thoroughly compared to AMD's? Or alternatively:
why is AMD's proprietary driver, backed by (presumably) deeper knowledge of
their own hardware, unable to be more stable than the open source equivalent,
when Nvidia's is?

~~~
Nullabillity
> What about Nvidia's GPUs, community relations or [insert other topic]
> hobbles their open source one so thoroughly compared to AMD?

AMD provides public documentation for how the driver should interact with
their GPUs, Nvidia does not.

Oh, and AMD employs some of the people developing the open Radeon and AMDGPU
(non-pro) drivers, while AFAIK Nouveau is a pure community project.

~~~
SXX
No idea if it's still the case, but at least Ben Skeggs, and possibly one more
developer, worked on Nouveau full time for Red Hat.

------
mVChr
For any other layer-7-only armchair mechanics who also weren't sure what a
HAL[1] was, I'm here to help.

[1]
[https://en.wikipedia.org/wiki/HAL_(software)](https://en.wikipedia.org/wiki/HAL_\(software\))

~~~
Noughmad
While HAL does stand for Hardware Abstraction Layer in general, the page you
linked is for a specific piece of software that was used by previous versions
of KDE and Gnome for this purpose.

~~~
mVChr
True, good clarification. I had visited the more general page also[1] but
thought the other was more relevant to the email thread.

And definitely much better than my only other inkling for what that meant in
relation to computer systems.[2]

[1]
[https://en.wikipedia.org/wiki/Hardware_abstraction](https://en.wikipedia.org/wiki/Hardware_abstraction)

[2]
[https://en.wikipedia.org/wiki/HAL_9000](https://en.wikipedia.org/wiki/HAL_9000)

------
codys
Summary: HALs in the Linux kernel are not acceptable; even large companies
are held to some standards.

Well, at least for the DRM (Direct Rendering Manager) subsystem.

------
microcolonel
As far as I'm concerned, it would be great to have more sophisticated
modesetting available on my new AMD hardware; but if it means a compromise in
the DRI I won't stand for it. There's no sense in having a review process if a
vendor can largely ignore a review and go silent for three quarters, then ask
for whatever they have to be merged because they'd like it that way.

------
MrQuincle
It seems people are thinking only about games for now, but a maintenance
perspective is also wise when considering GPUs used for other purposes.

One of these is cloud computing on large clusters of headless machines using
the parallelization that GPUs are known for. If you want to do this right you
definitely need input from a lot of sources, not just hacks in AMD delivered
code.

------
ageofwant
This is why the Linux kernel runs and will continue to run the world.

The No men is all that stands between us and the Yes men.

Praise and salutations to the No men, God bless you.

~~~
tw04
That's an interesting take... this is why Linux gaming sucks. I get the
pragmatism but we also need to be realistic that they aren't going to maintain
a completely separate driver for Linux when the Linux gaming market share is
barely a rounding error.

~~~
williadc
> I get the pragmatism but we also need to be realistic

I think what you mean is "I get the idealism but you also need to be
realistic." It's not pragmatic to stick to your guns and ask a multi-million
dollar company to change the code they submit to your open-source project.

~~~
omegaworks
Thing is, Linux was never intended to run the latest and greatest desktop
gaming hardware. It was intended to be an open architecture where people
contribute from all around the world to make something larger than themselves.

All of these people pretty much contribute their free time to it.

If they can't make basic architectural decisions that improve the worst kinds
of work (driver authorship and maintenance is awful drudgery), how can you
expect them to feel any kind of ownership over their fate?

You're asking unpaid people to do the work people get paid for. Even worse,
when this work just gets dumped on those unpaid people by people who are paid
quite well.

~~~
egil
Most work on Linux is paid[0]. However, asking paid people to do work that
aligns with neither their employers' interests nor the project maintainers'
is on par with what you're asking.

[0] 4.5 Development statistics
[https://lwn.net/Articles/679289/](https://lwn.net/Articles/679289/). Just
google lwn Development statistics for more.

~~~
mixedCase
>However, asking paid people to do work that does not align with their
employers interest nor the project maintainers, is on par with what you're
asking.

Oh, but it is in their employers' interest. It's the price of admission for
mainline. And if they want in on mainline, whether simply to harvest PR or to
net a contract that demands a mainlined kernel, they have to pay it. Just
another case of the well-known "cost of doing business".

In the case of AMD, I believe they want to reap the benefits of mainline (that
is, not having to support the breakage that comes with being out of tree) and
to be able to compete better with Nvidia, since AMD is unlikely to ever
develop an OpenGL implementation as good as Nvidia's, while Nvidia cannot, or
is unlikely to be able to, open source their driver.

------
leni536
Many people in this thread implicitly suggest that this will have a negative
impact on end users. I'm not so sure; amdgpu can still be distributed
separately from the vanilla kernel.

So this rejection is about maintainership, and it negatively affects
distribution of the amdgpu module only as a side effect. It's nothing that
can't be solved by Linux distributions though.

------
theparanoid
Sucks to be an AMD graphics driver developer right now.

~~~
Kubuxu
That is why in OSS communication is key. Imagine how much money would have
been saved if they had asked the maintainers first.

~~~
bsg75
I think they had forewarning for months:
[https://lists.freedesktop.org/archives/dri-
devel/2016-Februa...](https://lists.freedesktop.org/archives/dri-
devel/2016-February/100566.html)

------
mijoharas
Can anyone explain what the acronym DC stands for in these emails? (managed to
get most of the others, but googling DC kernel doesn't turn up much :) )

~~~
tremon
Display core. From a message upthread:

 _We propose to use the Display Core (DC) driver for display support on AMD's
upcoming GPU (referred to as uGPU in the rest of the doc). In order to avoid a
flag day the plan is to only support uGPU initially and transition to older
ASICs gradually.

The DC component has received extensive testing within AMD for DCE8, 10, and
11 GPUs and is being prepared for uGPU. Support should be better than amdgpu's
current display support._

------
fithisux
The bright side is that they are OSS friendly, not like NVIDIA, and they can
take their code to github for cleanup and further inclusion.

------
UhUhUhUh
I side with the little guy (i.e. < 2.5% market share) who makes my personal
life easier everyday.

------
hobarrera
What are the chances that some lone developers will pick this source up and
patch it up enough to be acceptable inside the kernel?

I mean, it _is_ all GPL, so it's perfectly okay. Is it too much for some dev
in search of fame to do this?

------
danvet
I commented a summary on the Phoronix forums already; copypasting here. I'm
the good cop in the good cop/bad cop game Dave & I have been playing in this.
Maybe this helps clear up some of the confusion and anger here.

This is me talking with my community hat on (not my Intel maintainer hat), and
with that hat on my overall goal is always to build a strong community so that
in the future open source gfx wins everywhere, and everyone can have good
drivers with source-code. Anyway:

\- "Why not merge through staging?" Staging is a ghetto, separate from the
main dri-devel discussions. We've merged a few drivers through staging; it's a
pain, and if your goal is to build a strong cross-vendor community and foster
good collaboration between different teams to share code, bugfixes and ideas,
then staging is fail. We've merged about 20 atomic modeset drivers in the past
2 years, none of them went through staging.

\- "Typing code twice doesn't make sense, why do you reject this?" Agreed, but
there's fundamentally two ways to share code in drivers. One is you add a
massive HAL to abstract away the differences between all the places you want
your driver to run in. The other is that you build a helper library that
programs different parts of your hw, and then you have a (fairly minimal) OS-
specific piece of glue that binds it together in a way that's best for each
OS. Simplifying things of course here, but the big lesson in Linux device
drivers (not just drm) is that a HAL is pain, and the little bit of additional
unshared code that the helper library approach requires gives you massive
benefits. Upstream doesn't ask AMD to not share code; it's only the specific
code-sharing design that DAL/DC implements which isn't good.

\- "Why do you expect perfect code before merging?" We don't; I think compared
to most other parts of the kernel, DRM is rather lenient in accepting good-
enough code - we know that somewhat bad code today is much more useful than
perfect code 2 years down the road, simply because in 2 years no one gives a
shit about your outdated gpu any more. But the goal is always to make the
community stronger, and like Dave explains in his follow-up, merging code that
hurts effective collaboration is likely an overall loss (for the community,
not individual vendors) and not worth it.

\- "Why not fix up post-merge?" Perfectly reasonable plan, and often what we
do. See above for why we tend to accept not-yet-perfect code rather often. But
doing that only makes sense when things will move forward soon & fast, and for
better or worse the DAL team is hidden behind that massive abstraction layer.
And I've seen a lot of these; if there's not massive pressure to fix up the
problem, it tends to get postponed forever, since demidlayering a driver or
subsystem is very hard work. We have some midlayer/abstraction-layer issues in
the drm core dating back to the first drm drivers 15 years ago, and it took
over 5 years to clean up that mess. For a grand total of about 10k lines of
code. Merging DAL as-is pretty much guarantees it'll never get fixed until the
driver is forked once more.

\- "Why don't you just talk and reach some sort of agreement?" There's lots of
talking going on, it's just that most of it happens in private because things
are complicated, and it's never easy to do such big course correction with big
projects like AMD's DAL/DC efforts.

\- "Why do you open source hippies hate AMD so much?" We don't; everyone wants
to get AMD on board with upstream and to be able to treat open-source gfx
drivers as a first-class citizen within AMD (stuff like using them to validate
and power-on hardware is what will make the difference between "Linux kinda
runs" and "Linux runs as good as or better than any other OS"). But doing
things the open source way is completely different from how companies tend to
do things traditionally (note: just different, not better or worse!), and if
you drag lots of engineers and teams and managers into upstream, the learning
experience tends to be painful for everyone and take years. We'll all get
there eventually, but it's not going to happen in a few days. It's just
unfortunate that things are a bit ugly while that's going on, but looking at
any other company that tries to do large-scale open-source efforts, especially
hw teams, it's the same story - e.g. see what IBM is trying to pull off with
OpenPOWER.

Hope that sheds some more light onto all this and calms everyone down ;-)

------
partycoder
I wonder why this feedback came just now and not earlier.

It will be time consuming to change now.

~~~
pero
It seems like they might have somehow ignored direction from as early as
February...

[https://lists.freedesktop.org/archives/dri-
devel/2016-Februa...](https://lists.freedesktop.org/archives/dri-
devel/2016-February/100566.html)

~~~
aidenn0
Even more clear:

> Cleaning up that is not enough, abstracting kernel API like kmalloc or i2c,
> or similar, is a no go. If the current drm infrastructure does not suit your
> need then you need to work on improving it to suit your need. You can not
> develop a whole drm layer inside your code and expect to upstream it.

[https://lists.freedesktop.org/archives/dri-
devel/2016-Februa...](https://lists.freedesktop.org/archives/dri-
devel/2016-February/100729.html)

~~~
wolfgke
> Cleaning up that is not enough, abstracting kernel API like kmalloc or i2c,
> or similar, is a no go. If the current drm infrastructure does not suit your
> need then you need to work on improving it to suit your need. You can not
> develop a whole drm layer inside your code and expect to upstream it.

But abstracting all this _is_ fitting the infrastructure to _their_ needs
(which is having a common driver infrastructure for _many_ operating systems).

~~~
pas
Sure, but then why not just continue with the binary blob?

Kernel development is largely an optimization process over an internal model
of what a kernel should do, and for that developers need first-class raw data:
the DRM maintainer wants to know how you'd like to change DRM. And though you
can put that into a translation/abstraction layer, it's not helpful, because
it doesn't scale: the maintainer would have to look at every such layer, come
up with a common one, and repeat this grueling task every time they want to
move forward with DRM itself.

------
nice_byte
honestly, i'd rather have a functional but proprietary driver than have
nothing at all. this is wrong.

~~~
farresito
They were advised months ago that this could happen, and they kept going
anyway. It might suck as a user, but you have to respect whatever standards
the Linux kernel asks for.

~~~
m-p-3
Binary blob it is I suppose..

~~~
tracker1
A 100kloc HAL shim for a single vendor is unacceptable. I don't blame them...
If they'd presented this as a joint approach with Nvidia and Intel, maybe, but
that is not the case... Why not make a HAL for Windows/OSX and use the Linux
model as the base?

In the end, it kind of sucks, but was the right thing to do.

~~~
izacus
I wonder how that proposal would go inside a corporation - "we need to rewrite
our driver at huge development costs instead of further competing with nVidia
because we need to make OS developers of marginal market importance happy".

~~~
dragandj
One of the problems for AMD is that everybody uses Nvidia for computing, and
nobody uses AMD. One of the reasons is that they have been ignoring non-
mainstream GPU markets. Well, those non-mainstream markets become mainstream,
and if you fall behind it is almost impossible to catch up. I personally
prefer AMD's GPUs for computing and prefer OpenCL to CUDA. But we are a
minority, since Nvidia has a much better software offering. AMD absolutely
needs to offer a superb Linux story if they want to get people to use their
hardware for computing instead of Nvidia. They need Linux more than Linux
needs them.

~~~
anoother
This. 100 times this.

AMD used to have a much better hardware story for compute. I don't know how it
looks now, but Nvidia have absolutely stolen the market due, in part, to their
excellent software - even on Linux.

My only experience of compute on AMD was a FirePro V7900 - an expensive,
workstation-class card. With both the latest and the 'workstation-class'
Catalyst Linux drivers, my LuxMark tests came out very fast, but very red.

With nVidia, I can simply add a repo to my Ubuntu machine and have the latest
stable drivers every time I do a dist-upgrade. If I want a solid, tested CUDA
dev environment, I can install the CUDA repo and do likewise.

AMD have to make sure the end-user experience with these cards is as smooth as
that, _and that everything works_.

I really hope AMDGPU-PRO is that experience. I haven't tried it yet, so I
can't comment.

It's easy to dismiss enthusiasts / hobbyists / developers / gamers as a 'small
market'. There was no 'pro gamer' market until ~10 years ago. Now there are
entire companies built off the back of it. AMD cannot continue to leave a sour
taste in end-users' mouths, otherwise there might not be any left soon.

~~~
izacus
And part of that is because Linux nVidia drivers are on par quality-wise with
Windows, because it's the same codebase and they develop it in lockstep.

AMD it seems won't be able to do that now.

~~~
ayrx
> AMD it seems won't be able to do that now.

Yes they can, they'll just have to commit resources to keeping up with kernel
changes instead of having it done for them upstream. You can't have your cake
and eat it as well.

~~~
izacus
nVidia has it. All they did is close off the drivers completely and not listen
to kernel devs. Seems to be working out for them very well.

Is that the lesson we want to give to companies?

~~~
prodigal_erik
If you want the kernel devs to maintain your code, it has to be maintainable.
If not, you have to fix it yourself every time the kernel changes. That's the
cost NVIDIA pays for their abomination.

