Remembering Larry Finger, who made Linux wireless work (arstechnica.com)
764 points by bookofjoe 10 months ago | 86 comments



Wow, this is sad news indeed for the world at large, but also for me personally. I'm suddenly in deep regret for having procrastinated reaching out to thank him. He didn't know this, but he was somewhat of a mentor for me.

In the early 00s I bought a laptop that had an RTL8188CE card in it that ran awfully under Linux. I forked his driver and made a number of changes to it and eventually got my wifi working really well (by doing things that could never be upstreamed due to legal/regulatory restrictions). Over the years I rebased often and reviewed the changes, and learned a lot from watching his code. It took a bit of getting used to, but a certain amount of beauty and appreciation emerged. One thing was very clear: this man was doing a lot of the work to keep the ship together. Even just keeping the driver compilable with each kernel release often took some non-trivial refactoring, which he did reliably and well.

Larry, you will be missed my friend. RIP.

If you are waiting to reach out to somebody, don't wait too long or it may suddenly be too late. The years can slip by in a flash.


> Even just keeping the driver compilable with each kernel release often took some non-trivial refactoring,

I realise that the kernel contributors themselves are mostly volunteers and as such can just do what they want, and writers of non-upstreamed drivers will need to get used to chasing trunk, but to me this sort of thing seems like a waste of human talent. Once something works, it should never not work again. It seems to me that all the layers of cruft we've accumulated over the decades haven't really helped much with this; in fact they probably mean more work just to stay still.


The Linux kernel has an incredibly strong userspace API stability guarantee. I think in practice this guarantee is unmatched in modern computing, and it's what allows things like containers to deliver such incredible long term value.

Software is defined by its interfaces, and unlike userspace APIs, this guarantee does not apply to internal kernel APIs. The ability to update the latter is what enables the former. Inability to update internal implementation details inevitably leads to ossification and obsolescence of the whole system. Linus Torvalds spoke about this recently in the context of new Rust code in the kernel.


>Inability to update internal implementation details inevitably leads to ossification and obsolescence of the whole system.

No, in practice I think it leads to growing complexity, because the system then adds additional APIs (e.g., v1, v2, v3, etc.) to support new features, but has to continue to maintain the old APIs for backwards compatibility, leading to lots of extra code and a huge maintenance burden.


Yes, precisely. The escalating complexity, maintenance burden and constraints eventually cause the whole system to lose momentum, lose support and become obsolete.


Like winapi with its ...Ex and numbered functions (NdrClientCall4).


Or Linux with its numbered syscalls (dup, dup2, dup3).


> Linux kernel has an incredibly strong userspace API stability guarantee

These guarantees are not that strong, unfortunately. For example, the API between the kernel-mode and user-mode halves of the GPU driver is unstable. To have 3D accelerated graphics on an RK3288 you're going to need either libmali-midgard-t76x-r14p0-r0p0-gbm.so or libmali-midgard-t76x-r14p0-r1p0-gbm.so as the userspace library, depending on the minor hardware revision of that RK3288.


Is there some reason that the Linux kernel couldn't offer a stable API for device drivers? At least for common types of devices. Would that really lead to ossification? Stable APIs usually make it easier to change code at lower levels.


> Is there some reason that the Linux kernel couldn't offer a stable API for device drivers?

Politics. Offering a stable-ish internal API would instantly lead to hardware vendors shipping closed source drivers for Linux.

Currently, you have to be the size of NVIDIA or AMD to be able to afford a closed source driver; it's simply a huge amount of work to keep up with the constant improvements in the Linux kernel.


Not true, there are smaller companies providing closed source kernel modules as well. Check any Android image for a device with an SD card slot from the era before the open source exFAT driver existed, and you'll likely find one of the two proprietary exFAT driver modules.

Embedded/mobile projects don't tend to use bleeding-edge kernels and stay stuck on one version regardless of closed source modules anyway.


Yeah, and these closed source exFAT drivers were a true pain to maintain, which is why Paragon (one of the vendors) was pretty pissed when Samsung's driver was upstreamed to Linux [1] - it killed off their business.

[1] https://arstechnica.com/information-technology/2020/03/the-e...


Much of Windows' OS weirdness can be attributed to the fact that they have a relentless dedication to backwards compatibility.


Much of Linux's weirdness can be attributed to that as well -- at least the userland API.


I've struggled philosophically with that very question as well (especially having to update my own WORKING code just to conform to API changes), and I'm very torn on it. I just don't think there's a good point to pick at which to "freeze" the API and say no more changes are allowed. It would greatly hamper innovation and IMHO ultimately lead to the Linux kernel having major forks or being displaced by something more adaptable. Despite the pain points, I think it's an overall good, though that doesn't stop the stinging of having to keep up. It really forces you to either be in or out. It can be very difficult to be a part time contributor.


I've always wondered about how versioning could help. I don't think it would work with a monolithic kernel, but a microkernel might allow you to run multiple versions of different things, and they would still work.

I've also fantasised about the same thing in application programming languages - e.g. the way racket runs loads of different dialects, you could also have versioning for different things running at the same time.


POV-Ray does exactly this, and warns you if you do not include a #version statement in your source file.


It not only makes it difficult to be a part time contributor, I imagine it also makes life more difficult for maintainers: before they accept a patch from a part time contributor, they always have to ask themselves "will this guy still be around to update his code when needed"?


Just accept the fact that software does age with time. Even if it didn't change a literal bit, the real world it runs in and the software ecosystem it runs on did. Any software older than 10 years should be rewritten from absolute scratch, not kept on life support at ever increasing expense.


I would encourage you to read this if you haven’t: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

There are exceptions to every rule, re-writing old software because it’s old needs a rare exception indeed.


I know those arguments: rewrites are bad, it's harder to read code than to write it, a rewrite killed Netscape, and so on. I aim to share a view I know is radical and controversial: that unequivocally all software needs a rewrite every decade, provided the finances are there to do it.


I disagree. I think rewrites are sometimes the right thing, but rarely.

The Linux kernel is a good example. Should we rewrite it? No, of course not. Was your point about code that hasn't been maintained in a decade? Sure, maybe then. But I can't square "code that hasn't been touched in a decade" with "still in active use today".


The Linux kernel is the perfect example for my case. We should absolutely create a new OS to replace Linux. Just an example, and not only my opinion, rather a widespread one: There is no need for multiple mechanisms of interprocess communication, they only add to the complexity.

Conway’s Law[1]:

> Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.

In his famous lecture [2], Casey Muratori makes the case that Conway’s Law applies over time too, meaning there is not a single organization, but multiple versions of the organization over time, and all of them end up adding more design (decisions) to the product. The best example, which he gives, is the 5 different UIs to change the sound level in Windows.

Rewrites of old (10 years old, as I said) software reset the age of the software to zero and very often drastically simplify both the design and the codebase. I'm not telling you to rewrite your 3-year-old product.

1: https://en.wikipedia.org/wiki/Conway%27s_law

2: https://youtu.be/5IUj1EZwpJY?t=1696


> and very often drastically simplify both the design and the codebase.

This is the major flaw in your argument. This just isn’t true.


Virtual machines exist, and it might be easier to run something in them than to port it every two years.


I'm talking about innovation and progress. In an ideal world retro computing remains a hobby.


In my ideal world, innovation and progress means new software brings new functionality - not half the old functionality that barely works because "rewrite".


Progress is in the eye of the beholder. Case in point: Fiddler. A debugging HTTP proxy, originally a one-man-show; the Classic version is still amongst the most powerful tools out there. Except...well...there's no way to monetize it. So, a New and Innovated version, which is less powerful (yes, it has some new features, but at the expense of dropping the actual power tools), more bloated, and subscription-based. Progress!


Why?


Kernel contributors are mostly employees of hardware companies getting paid to contribute: https://lwn.net/Articles/941675/

It may be that, in the hypothetical absence of a salary giving them an excuse, many of these individuals would still choose to volunteer, but it's wrong to call contributors "mostly volunteers" when it's their day job. Changesets from those known to be spending their own time on contributing are a relatively small fraction of the changes.


Found a small typo, but it's too late to edit the original. I meant early 2010s (not early 00s)


This is what makes me come to HN every single day.


This is what makes me come here every single minute my anti-procrastination setting allows me to.


This is what makes me delete my anti-procrastination setting and quit my job to spend more time here.


What things did you do that weren't legal to make your WiFi work better?


Probably increasing maximum tx power or extending the frequency range beyond what's allowed in the region to enable more channels, and hence getting decent speed/quality in a crowded area.

These things are illegal for a reason.


Increasing the cap on max transmit power. I compared the output of that card to other cards using my spectrum analyzer, and max tx on that card was extremely weak compared to the others.


Thanks for sharing your story, it meant a lot to me today.


it's really nice to read comments like this one. thanks for sharing it. don't feel too regretful or guilty, things can happen.


Ars Technica has a solid write-up. He did a lot of work making the Linux wifi and driver ecosystem significantly better.

https://arstechnica.com/gadgets/2024/06/larry-finger-linux-w...

I remember cursing NDIS wrappers and the Broadcom wifi ecosystem a long time ago. Larry helped fix that, and mentored many others along the way.

Quote from the Ars Technica article: "In a 2023 Quora response to someone asking if someone without "any formal training in computer science" can "contribute something substantial" to Linux, Finger writes, "I think that I have." Finger links to the stats for the 6.4 kernel, showing 172,346 lines of his code in it, roughly 0.5% of the total."


(This was originally from https://news.ycombinator.com/item?id=40770724 but we merged those comments hither)


dang, could this merit a black bar in remembrance and honor?


Kudos to Ars Technica for this piece.


I always think of him every week or so for his support of a belkin wireless card FG… I had as a broke college student in 2005 trying to get linux working on my machine. His driver worked better than the belkin supported windows driver.

What do I think about? Why the hell did he volunteer his post work hours trying to do this menial job hardware thing. Did he not have a family or something? He did. And today they lost him.


> What do I think about? Why the hell did he volunteer his post work hours trying to do this menial job hardware thing. Did he not have a family or something?

I wondered the same, imagining some 40-something FTE frustrated with bureaucratic work projects and with a couple young kids. Guess what, this guy passed at 84! Which means the driver work happened in his 60+ years. I will be lucky if I can remember my name at that age. :-)


It seems he was the best in the world at broad Linux Wi-Fi driver development. That’s quite a valuable accomplishment. Thanks, Larry Finger.


Same, if not for Larry I may not still be a Linux user. Unsung hero indeed.


> Broadcom provided no code for its gear

> so Finger helped reverse-engineer the necessary specs by manually dumping and reading hardware registers

I really wanted to leave this quote here for all to see. The badassery of this act should not be underestimated.

I've done a little bit of work along those lines. Reverse engineering my laptop's features, writing my own free software to drive them from inside Linux, emailing the manufacturer asking for documentation and receiving user help pages. It was mostly USB stuff, the most well documented interface imaginable, and it was still hard. I simply cannot fathom how he reverse engineered this Wi-Fi stuff. That's just a huge inspiration for me, I hope I can get near his level some day.

Thank you so much for your work and RIP.


You might enjoy this series of articles about writing a Wi-Fi driver using reverse engineering techniques: https://zeus.ugent.be/blog/23-24/open-source-esp32-wifi-mac/


Yes I really enjoyed reading that article, thank you so much.

I'm really looking forward to seeing a brave soul show up to try and reverse engineer those 50 thousand hardware initialization memory accesses. That sort of thing is just incredibly difficult. I'd really enjoy reading about that too.

In my case I intercepted USB communications with wireshark and tried to make sense of the packets. I'd use the proprietary manufacturer app, capture what got sent over USB and correlate the packets to hardware behavior. Mapped out all the functions that way. There were hundreds of them but mercifully most of those were the same command but with different parameters. The result was this totally magical software which sends buffers containing seemingly random numbers to the hardware and somehow that makes things happen.

  /* Clevo Control Center
   * Effects
   * Wireshark Leftover Capture Data
   *
   *  0 Wave     cc00040000007f
   *  1 Breathe  cc0a000000007f
   *  2 Scan     cc000a0000007f
   *  3 Blink    cc0b000000007f
   *  4 Random   cc000900000000
   *  5 Ripple   cc070000000000
   *  6 Snake    cc000b00000053
   *
   * There seems to be no pattern to it. Does the value of the last byte matter?
   * My keyboard apparently doesn't support the ripple effect,
   * even though it is present in the Clevo Control Center interface.
   */
Mapping out 50 thousand memory accesses though? Whoever achieves that has my respect and my attention.


I've only just started learning about this field myself, but your approach looks good.

That article has some follow-ups which should be linked below it or can be googled. IIRC the author uses some clever shortcuts to speed up the work.


The new website containing all articles: https://esp32-open-mac.be/


The name didn't ring a bell, until I saw his username in the CREDITS commit - lwfinger! That username, along with "hadess" even more so, will be burnt into my memory forever. Compiling their drivers for the RTL8723BS card was the first time I got my hands dirty with the inner workings of Linux, eventually getting a terrible little Intel Bay Trail convertible to run Linux almost perfectly. I ended up posting a tutorial on the process of getting Ubuntu working on it, which, judging by the 80-odd stars and the long-running discussion below it, helped a good number of people keep using their win8.1 e-waste for quite some time.


Besides Larry Finger, another Linux kernel developer passed away this week: https://lwn.net/Articles/979617/


This was posted here too, but unfortunately didn't gain a lot of traction (maybe because the title doesn't mention who he was?): https://news.ycombinator.com/item?id=40815468


People keep making fun of the year of the linux desktop, but for me that year was distinctly around 2007 when wifi started working mostly painlessly on my thinkpad.


Larry’s work on maintaining a bunch of Realtek’s vendor drivers on GitHub was huge for a lot of the community. Even today these forks often work much better than the mainline drivers. He will be missed! As others said, he was a huge inspiration.


RIP. Thanks for the work on wifi drivers. NDISWrapper was a pain.


Legendary. Larry led a full life. He mentored many hundreds over the years and his legacy will live on in those who he taught and inspired.


> will benefit users for years to come

/years/decades/s




We need Larry Finger for hibernation.

Please don't get me started on suspend-mode low energy consumption. It hasn't been true for any device I've owned. Perhaps yours, fine, but my anecdotal experiences and acquaintances' show you're in a minority.


> Larry Finger, and fish, from his Quora profile.

The inclusion of the fish in the tagline made me smile. There’s an innocence to the sentence that captures the image really well.


damn. this one guy changed the world. if not for him, Linux would not have had those wireless drivers, and the downstream effects of younger teens being able to use Linux and then contribute to that ecosystem and and and.



WiFi in the late 2000s early 2010s was a crazy place, man. Looking at the code it's wild how much he contributed to the process. Damn.


This guy deserves the black bar, certainly.


I came here to say the same, this guy pushed Linux desktop adoption forward like few others by himself and just for passion.

RIP


I would've never used Linux and probably wouldn't have got into programming without https://github.com/lwfinger/rtl8723be (it took like five or six years to get mainlined). He's a hero.



Thanks - I've changed the URL to the last one of those (from https://www.developer-tech.com/news/2024/jun/24/linux-commun...) because it seems to have the most background.


I know it violates the general guidelines to change the headline, but the new title doesn't make clear that he has passed away. I imagine there are a lot of people who might overlook it without that key part in the headline.


Ok, I've grafted on a piece of the subtitle that makes it clearer.


awesome, looks great, thanks so much!


Rest in peace. I'm still rocking Linux on my MacBook Pro from 2012 which has a bcm nic in it. Thank you.


If that is the same as the MacBook Airs of that year, then good luck to you. The chipsets of that year are a horrible mess. Even in macOS it was finicky, with a 1+ year bug where wifi would not come up automatically if your MacBook went to sleep with Bluetooth on.


Sorry to hear it. His work was a key part of my entry into the software world.


I used one of his GitHub driver repos to make wireless work on a realtek driver. Rest in peace.


What a hero. Rest in peace.


Legendary.


[flagged]


Wrong and terrible timing. There's a hell of a lot more to "Linux WiFi" than just the driver code that Larry maintained. I definitely wouldn't rule out hardware issues. I've seen race conditions in driver code before, but it's quite rare and Larry was really good at identifying and eliminating those. If your laptop is workable, pop the cover and make sure the wifi card connection is fully seated. Also check the dmesg logs early and often after boot and make note of anything related to wifi. Chances are good that if it's not a hardware issue, there's something in there to help point you in the right direction.


Probably the most common reason for the integrated wifi card becoming “non-deterministically non-recognized” is that the OEM fucked up the rf-kill in that there are slight differences between what the EC firmware and ACPI expect and how the hardware is implemented (missing pull-ups, EMI induced into high-impedance logic trace somewhere…).

Anecdotally, on any laptop that I have seen with these kinds of issues (basically every new Intel laptop I bought between 2017 and 2020, which includes a lot of weird no-name tablet-in-laptop-formfactor crap, but also a ThinkPad X270), it progresses such that the WLAN becomes unusable in Windows but you would hardly even notice in Linux, where it just works.

[edit: obviously comparing Windows to Windows is nonsense, it works in _Linux_]


Your comment is rude and unhelpful.


I thought I wrote it nicely but I guess not

I can’t think of a more powerful engineering endorsement than saying he did a Herculean effort to manually keep it all working

Basically we’re all screwed now cause he was holding the whole thing up


This is a memorial thread, not a "rant about Linux WiFi thread". You're acting like a complete moron and asshole. This comment is the polite and sanitized version of what I would actually like to say.


:(



