This is a deeply pessimistic take, and I think it's totally incorrect. While I believe that the traditional open source model is going to change, it's probably going to get better than ever.
AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model in which open source projects receive large numbers of dollar contributions, which the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.
Funding for open source projects has been a problem for about as long as open source projects have existed. I'm not sure I follow why you think specifying that donations will go toward LLM tokens will suddenly open the floodgates.
I did. Your argument seems to be that LLMs allow users who want specific features to direct a donation specifically toward the (token) costs of developing that feature. But I don't see how that's any different from just offering to pay someone to implement the feature you want. In fact, this does happen, e.g. in the case of companies hiring Linux devs; but it hasn't worked as a general-purpose OSS-funding mechanism.
Because offering to pay people to implement features is very expensive and tends to take a long time, if they do it at all. Often, they can't even find people to pay to implement things.
In the case of companies hiring Linux devs, that is very, very costly and thereby inaccessible. Scale makes it different from the scenario of paying a few dollars to contribute tokens to fix a bug.
It seems the assumption you're making without justification is that LLMs will significantly reduce the cost of software development. Even if LLMs can reliably write new features (or even just fix bugs), the maintainer still needs to spend time (which is not free) verifying and code-reviewing the LLM-produced code.
There are a few valid arguments that I see to support the pessimism:
1. When people use LLMs to code, they never read the docs (why would they), so they miss the fact that the open source library may have a paid version or extension. This means that open source maintainers will receive less revenue and may not be able to sustain their open source libraries as a result. This is essentially what the Tailwind devs mentioned.
2. Bug bounties have encouraged people to submit crap, which wastes maintainers' time and may lead them to close pull requests. If they do the latter, then they won't get any outside help (or at least, they will get less). Even if they don't do that, they now have a higher burden than previously.
Bug bounties had this risk from day one. Any time you create a reward for something there will be people looking to game it for maximal personal benefit. LLMs and coding agents have just made it that much easier to churn out "vulnerability" reports and amplified it.
But locally, dollars are a zero-sum game. Your dollars came from someone else. If you make a project better for yourself without making it better for others you can possibly one-up others and make more dollars with it. If you make it better for everyone that's not necessarily the case. You're just diluting your money and soon enough you won't have money and you're eliminated from the race.
While I'd like to believe in the decency and generosity of humans, I don't get the economic case for donating money to the agent behind an OSS project when the person could spend the money on the tokens locally themselves and reap the exclusive reward. If it really is just about money, only that makes sense.
Obviously this is a gross oversimplification, but I don't think you can ignore the rational economics of this, since in capitalism your dollars are earned through competition.
Why would people/companies donate more money to open source in the future that they don’t already donate today?
It’s a tragedy-of-the-commons problem. Most of the money available is not tied to decision makers who are ideologically aligned with open source, so I don’t see why they’d donate any more in the future.
They usually do so because they are critically reliant on a library that’s going to die, think it’s good PR, think it makes engineers happy (don’t think they care about that anymore), or think they can gain control of some aspect of the industry (looking at you, Futurewei and the corporate workers of the Rust project).
Because donating to open source projects today has an extremely unclear payoff. For example, I donate to KDE, which is my favorite Linux desktop environment. However, this does not have a measurable impact on my day-to-day usage of KDE. It's very abstract in that I'm making a tiny, opaque contribution to its development, but I have no influence on what gets developed.
More concretely, there are many features that I'd love to see in KDE which don't currently exist. It would be amazing if I could just donate $10, $20, $50 and submit a ticket for a maintainer to consider implementing the feature. If they agree that it's a feature worth having, then my donation easily covers running AI for an hour to get it done. And then I'd be able to use that feature a few days later.
1. You can already do that; it just costs more than $10.
2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is going to spend a sizeable fraction of the time reviewing and testing the feature that they would have spent coding it. You’re now back to 1.
Conceivably it might make it a little cheaper, but not anywhere close to the kind of money you’re talking about.
Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.
> Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.
The comment you responded to is (presumably) talking about the transition phase where LLMs can help implement but not fully deliver a feature and need human oversight.
If there are reasonably good devs in low-CoL areas who can coax a new feature or bug fix for an open source project out of an LLM for $50, I think it’s worth trialling as a business model.
Did you skip the first part of my comment where I specifically addressed that?
Even if the human is only doing review and QA, there’s no low-cost-of-living area where $50 gets you enough time to do those things from someone with enough competence to do them. Much less $10.
Yea, that’s the ideologically not aligned part I referenced.
If AI can make features without humans why would I, as a profit maximizing organization, donate that resource instead of keeping it in house? If we’re not gonna have human eyes on it then we’re not getting more secure, I don’t really think positive PR would exist for that, and it would deny competitors resources you now have that they don’t.
As a maintainer of a medium-size OSS project, I agree. We've been running the project for over a decade, and a few years back Google came out with a competitor that pretty much sucked the air out of our field. It didn't matter that our product was better; we didn't have the resources to compete with a Google hobby project.
As a result, our work on the project got reduced to maintenance until coding agents got better. Over the past year I've rewritten a spectacular amount of the code using AI agents. More importantly, I was able to construct enterprise-level testing, which was a herculean task I just couldn't take on on my own.
The way I see it, AI brought back my OSS project that was heading to purgatory.
EDIT: Also about OPs post. It's really f*ing bug bounties that are the problem. These things are horrible and should die in fire...
> AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours.
I think this is true, but it misses the point: quantity of code contributions is absolutely useless without quality. You're correct that OSS programmer hours are the most scarce asset OSS has, but AI absolutely makes this scarce resource even more scarce by wasting OSS programmers' time sifting through clanker slop.
There literally isn't an upside. The code produced by AI simply isn't good enough consistently enough.
That's setting aside the ethical issues of stealing other people's work and spewing even more carbon into the atmosphere.
MacOS is the "it just works" operating system. As such, I think the moment that you need to declare custom workarounds like this, it kind of loses its legitimacy, and you should already be in Linux land.
I abhor the current state of macOS and Tim Cook’s leadership, but your take is nonsensical.
For one, “it just works” hasn’t been used in over a decade, same as Google’s “don’t be evil”, which does tell you something about their current philosophies.
But more importantly, “it just works” was obviously never about “it reads your mind and does every software feature however you personally like”; it was about the integration of hardware and software and not having to fiddle with drivers and settings to get hardware basics working.
While I get the butterfly keyboard hate (though mine is so far still perfectly fine), the USB-C ports were amazing. I have a 2016 MacBook Pro and that thing still cooks really well. As somebody who worked in video production, those ports were a godsend. No more waiting around for footage to transfer all the damn time. Complete game changer. Plus, with one or two quality docks I could plug in literally anything I ever needed. With the AMD GPU I could also edit pretty beefy 4K with no proxies most of the time. In 2016/2017 that was pretty awesome. Plus it was the last good Intel machine they made IMO, so good compatibility with lots of software, target display mode for old iMacs, Windows if I wanted it, etc.
Probably my favorite laptop I’ve ever owned. Powerful machine, still sees work, runs great.
It introduced USB-C before it was ubiquitous even on smartphones, at least in my area. All the peripherals still needed a dongle; it was the dongle era. The keyboard was okay to type on once I got used to the short travel, but the keycaps easily broke off, and dust would get in and keys wouldn't register. Also, the whole laptop would get very hot, at least the 13" pro without the touchbar. I prefer the older 2015 model, before the butterfly keyboard; that's the one I had at work but had to give up, and I regret waiting for the new models instead of purchasing the same one.
Like I said, totally get the keyboard hate. Mine just turned out perfectly fine.
People hated the dongles but again I could hook up everything. Dozens of connections with throughput I could never get before. It was fantastic for my needs and still is!
Compared to my old NixOS setup with a tiling window manager, I’d say macOS panes just don’t work. I have Rectangle, but it’s no comparison to the full tiling experience. I switched for Apple Silicon, nothing more.
I use AeroSpace and it's an okay but not great tiling window manager. Note that AeroSpace really is among the best on macOS, but I'm guessing the OS APIs simply don't expose enough hooks.
Most people wouldn't touch NixOS or a Linux-style tiling window manager with a 10-ft pole, though. For them, something like Rectangle is a good in-between.
What is the full tiling experience like? I was never a tiling WM guy; on Linux I'd just set some KDE shortcuts for moving and resizing windows. On macOS I used Spectacle and then Rectangle, but I'm not sure what I'm missing out on. I was always content with Spectacle.
Even if this were a "custom workaround", this argument would be extreme "all or nothing" binary thinking.
An OS can "just work" for most of the stuff a user does, and just need some tweaking here or there. It doesn't mean that if the "just works" stuff is not 100%, you're just as good going to Linux.
Anyway, this is not some "custom workaround", it's a regular Apple-provided macOS toggle. It's just not exposed in the UI, because for most users, the regular way "just works". I know all kinds of "defaults" toggles, and barely use 1/100 of them, because the actual defaults are fine.
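For anyone curious, those hidden toggles live in the same `defaults` preference system the UI uses; the UI just doesn't surface all the keys. Two commonly cited real examples (the values shown are illustrative, not recommendations):

```shell
# Remove the delay before an auto-hidden Dock slides back in:
defaults write com.apple.dock autohide-delay -float 0
killall Dock   # restart the Dock so it picks up the change

# Let held-down keys repeat instead of showing the accent popover:
defaults write -g ApplePressAndHoldEnabled -bool false

# Inspect a stored value (errors out if the key was never set):
defaults read com.apple.dock autohide-delay
```

Everything here round-trips through the same plist files System Settings writes, which is why these tweaks survive OS updates far better than third-party hacks.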
But, believe it or not, macOS is very customizable (and previously very scriptable). I have Shift+Command+M (maximize) bound to resize to fit the content (different from full screen in macOS). Anything that’s in a menu can be bound to a keyboard shortcut without any additional utilities.
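That menu-to-shortcut binding is done in System Settings > Keyboard > App Shortcuts, which under the hood writes `NSUserKeyEquivalents` entries; a sketch of the equivalent `defaults` command (the menu title and binding below are just an example, and the title must match the menu item exactly):

```shell
# Bind a Safari menu item to Cmd+Shift+R as an illustration.
# Modifier symbols: @ = Command, $ = Shift, ~ = Option, ^ = Control.
defaults write com.apple.Safari NSUserKeyEquivalents -dict-add 'Show Reading List' '@$r'
# Relaunch the app before the new shortcut takes effect.
```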
I kind of agree with you, but on macOS I still don’t have to ever think about drivers. The hardware just works. Linux isn’t quite there yet. My work XPS laptop running Ubuntu is close, but not quite the same.
Yes, the Mac user faces incredible disillusion when he discovers that "just works" was just another marketing gimmick (much like "it doesn't get viruses!").
As a long-time Mac user, "it just works" actually meant "it either works or it doesn't" - a *binary*. Whereas other OSes were shades of grey - it _might_ work if you spend time searching and trying random combinations in settings.
As a 20+ year heavy mac AND linux user, both are true.
It doesn't get viruses, especially if you don't install random junk from warez sites and stick to MAS, brew, and a few trusted vendors. Even if you do install crap, it's trojans not viruses, which are more like the Yeti (something like that might exist, but few have seen it) than a problem mac users have.
And things "just work" way, way, way more than they do in Linux (and I started using it professionally as a desktop and for dev work in the late 1990s; I'm no weekend tourist to it), which is exactly what I expected as a pragmatist. Only some non-existent caricature user that exists in strawman arguments expected everything to be perfect.
The "they don't catch viruses" line is a bold lie; back then, when I worked in tech sales, the Apple promoter wanted us to repeat the lie ad nauseam. They definitely catch malware, and it's as easy as on any other platform (also because today the malware will likely be running in a headless Chromium instance).
I've had a MacBook since 2010, and to me its software quality has been going downhill since Snow Leopard; today it's completely unrecognizable.
I think Apple jumped the shark more or less in 2012 with the flat layouts, when they also started changing ages-old defaults, hiding and then removing features for power users, too much handholding and telling you what's best for you, things like that.
My MacBook from that era is still with me, but it runs Debian now, same as any other PC I use for work or leisure, and it's really so much better for me as a programmer and as a user. Freedom. It's really freedom (and KDE's ergonomics really click with me). I recently had to install unsigned software on one of our workplace's Mac minis (which I'm glad I don't have to use anymore) and it was so incredibly frustrating I wanted to smash that thing.
>The "they don't catch viruses" line is a bold lie; back then, when I worked in tech sales, the Apple promoter wanted us to repeat the lie ad nauseam. They definitely catch malware, and it's as easy as on any other platform (also because today the malware will likely be running in a headless Chromium instance)
Malware is not a virus. And it doesn't catch malware if you keep to trusted sources and keep on OS protective layers like SIP.
Install junk from warez sites and the like, and YOU installed something (still not a virus: a trojan). If you couldn't install it at all (also totally possible), you'd be crying about how macOS restricts you.
In over 20 years of OS X use I've never had any virus, nor did anyone I know. Over 30 years of Windows I've had plenty.
>They definitely catch malware and it's as easy as on any other platform
If you install it, it's not a virus (and you can't avoid that in any OS, unless they lock you out of arbitrary program download and execution and only have you run in sandboxes).
Even so, you can very well install and not give it privileges, and then it can't even touch important directories. If you install it && enter your admin credentials to let it do whatever, it's on you.
>I've had a MacBook since 2010, and to me its software quality has been going downhill since Snow Leopard; today it's completely unrecognizable.
It has, but that has nothing to do with allowing viruses or even malware (in fact, regarding the latter, it is more secure than it was in 2010 via multiple measures).
Windows is also the "it just works" operating system, and it has hundreds of useful things you can only do through registry hacks.
It's not a very useful test.
I look at the good things macOS has over desktop Linux, like how Cmd-C/V works across all apps, and it would be amazing if bridging that gap were just a CLI command away.
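The pieces do exist as CLI commands on both sides (`pbcopy`/`pbpaste` on macOS, `xclip` on X11); what's missing is uniformity. A sketch of the kind of bridge the comment wishes for, with a temp-file fallback so the same two commands work anywhere (the underlying tool names are real, but the `clipcopy`/`clippaste` wrapper names are made up):

```shell
# Portable clipboard wrappers: prefer the native tool, fall back to a
# shared temp file when no clipboard utility is available.
CLIP_FILE="${TMPDIR:-/tmp}/clipboard.txt"

clipcopy() {
    if command -v pbcopy >/dev/null 2>&1; then
        pbcopy                                 # macOS
    elif [ -n "$DISPLAY" ] && command -v xclip >/dev/null 2>&1; then
        xclip -selection clipboard             # X11
    else
        cat > "$CLIP_FILE"                     # last-resort fallback
    fi
}

clippaste() {
    if command -v pbpaste >/dev/null 2>&1; then
        pbpaste
    elif [ -n "$DISPLAY" ] && command -v xclip >/dev/null 2>&1; then
        xclip -selection clipboard -o
    else
        cat "$CLIP_FILE"
    fi
}
```

Usage is then identical everywhere: `echo hi | clipcopy` and `clippaste`, regardless of which OS you're on.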
Windows has that ambition as a home-user OS, though, like macOS. And in the discussion of "it just works" operating systems, whose word are we to go by other than the vendor's ambitions? Personal opinions? In that case, neither qualifies, because both have struggled to always work in all scenarios since their respective inceptions.
When the phrase originated, manually updating CONFIG.SYS and AUTOEXEC.BAT were expected skills of a home PC owner. The idea of buying a device, plugging it in, and having it work without a complex setup was unheard of. "It just works" on the Mac meant the absence of a DOS layer, IRQs, command lines, etc.
AFAIK Windows has never been known or marketed as "it just works". It goes a long way to maintain backwards compatibility, but let's not kid ourselves that it bears any semblance to what Apple's "it just works" is supposed to mean.
I wrote about something very similar a long time ago.[0]
The key problem is that most contemporary web design does not follow any idioms. Idioms are conventions of design that are universally understood. Skillful use of idioms makes it much easier to parse what is going on on a given page.
Where we are with most applications is that they try to define their own idioms, i.e. their own icons, their own navigation patterns, etc. But this is very arrogant because they're assuming that the user has the time to build that familiarity with all those idioms. This is never the case.
Every day I use web applications from nominally mature companies, and they have totally different icon sets for the same actions. This is immensely distracting and hard to read. Every company sees an opportunity to define their own icons, when what they should be doing is using the exact same ones as everyone else because that makes it easy to understand.
But that's the point of the article. The state is failing in all of these dimensions, while state tax revenues and budgets have nearly doubled! We have more spending, but it's not fixing the issues. Many Silicon Valley people are upset about the ineffectiveness of this spending.
Now the spending is going to go even higher, driving out Silicon Valley in the process, but it will not achieve any of the objectives. In fact, it may be destructive to California as a whole.
Thank you for this comment! I am not a resident of California; I just read the news, and the complaints I hear from people who do live there tend to sync up. I still have very little sympathy for the calls to outrage, but that's a much more reasonable perspective.
Leetcodes are fun! You should find pleasure in solving puzzles and figuring things out. Consider yourself lucky that the interview process contains a part that is basically a game that you can get good at by memorization.
I genuinely don't find it fun to solve puzzles unless they have an application or end goal in mind. Tell me to find cycles in a graph as a puzzle and I'll roll my eyes. It's worse if you ask me to do a topological sort for detecting cycles using some named algorithm.
Ask me, maybe, to verify that a CI verification sequence is valid, and I'll probably be interested.
I understand that leetcode problems can be abstractions of everyday problems you might deal with at work. But I find them too academic, robbing people of the rich context of actual problems. They don't teach you how to draw equivalences between actual problems and their models.
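For what it's worth, the cycle-detection puzzle mentioned above is small enough to sketch; a minimal DFS version in Python, using the standard three-color marking scheme (this is the generic textbook approach, not any specific interview problem's expected answer):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [neighbors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {u: WHITE for u in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:      # back edge -> cycle
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in graph)
```

The "rich context" version of this same routine is things like detecting circular dependencies in a build graph or import cycles in a module tree.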
That's exactly why they don't make much sense as an interview process. You don't need to be thrilled by puzzles to be an effective developer. Also if you reach the goal of solving problems by memorization, I'd be more concerned about how you communicate about your ideas to others and write code that's understandable and maintainable.
European manufacturers all decided to focus on higher-cost vehicles after Covid-19 because margins are slightly higher and they make more on the financing. They have intentionally deserted the entry-level market.
Now sales numbers are starting to plummet, so I fully expect to see them blame everything from regulators to unfair Chinese exports rather than admit it’s just a normal consequence of their own strategy.
Add to that that most of them have intentionally resisted the shift towards electric and away from diesel that regulation forced on them, and you get a pretty bleak picture. But, on that point, it seems that Germany will, as usual, cave in and drag the whole EU down with it, so they might have been right.
For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...