When it comes to understanding large organizations I think a simple principle should apply:
The Purpose of a System is What it Does[1].
Whether malicious or not, the system does what it does. If people wanted it to do something else they would change the system. The reality is that when corporations make mistakes that benefit them those mistakes rarely get fixed without some sort of public outcry, turning the "mistake" into a "feature".
Intriguing concept, but I feel it needlessly breaks language. A more narrow (and to me, less pompous) formulation would be that social groups have their own purpose, different from (though not unrelated to) the purposes of the individual members. And this collective purpose can be read best from the actions of the collective, just like the purpose of a person is best divined from their actions (actions speak louder than words).
It really has been remarkable watching GitHub just crumble as an organization. There's a lot of discussion about why: the switch from being independent to being part of Microsoft, resources being pushed to Copilot instead of the core service, the organizational structure itself, a reliance on vibe coding, etc.
Regardless of the reason, it's undeniable that GitHub is facing some serious issues. The unofficial status page[1] tells a horrifying story.
I would absolutely love to get some insider perspective on this (if only to learn how to prevent it from happening anywhere I work), but I think it's clear to anyone who has been paying any attention that GitHub is a sinking ship and the only reason people haven't abandoned it already is inertia. Considering how much else is changing in software right now I don't think inertia is enough to sustain a company.
I do not work at MSFT but I don't feel that I need insider perspective to understand what's going on. GitHub is being managed the way other services get managed once they're bought by big companies. Initially fine, then starts to decline, then eventually craters. Everything becomes the numbers game.
Microsoft, Oracle, VMware, CA (where software goes to die), Salesforce, the list goes on. Every once in a great while there's a good M&A team that doesn't fuck it up but that's sadly rare.
I feel like MS went out of its way to make a point that GitHub and NPM would be independent orgs that no longer had to worry about making keep-the-lights-on money. It was positioned as a benevolent acquisition for the good of the development community.
As so often happens, that didn't last long.
Nest was originally independent. Didn't take long for it to merge with the Google Home brand.
The problem IMO is that they filled GitHub with Microsoft folks who just don't have the self-sufficient hacker engineering culture required to balance the "amusement park" vibe GitHub always paired it with. So now it's just an amusement park for Microsoft employees to go and do silly work with teams of 100 that should have been done by a skilled team of 5 hackers.
I was there for a couple of years after the acquisition and just couldn't stand seeing it. I felt I was becoming useless, working in a madhouse that was becoming more maddening every day. And MSFT just keeps replacing leadership with more and more disconnected people who just don't get it, who never used GitHub the way the OG users did. Two years ago I interviewed again for my old team, largely out of curiosity, and the Microsoft engineering manager asked me some brain-teaser question as my interview. The disconnect is just too large.
They don't take GitHub seriously. It's a toy to MSFT and vibes matter more than the product itself. And they hire for it using MSFT drone logic, fill it with people hired and profiled to be MSFT-lifers, and these two things don't mix.
Sorry I don't have anything great to say. And of course, many of these MSFT folks were actually damn good, but they were swimming in a sea of MSFT drones.
> would be independent orgs that no longer had to worry about making keep-the-lights-on money
It is honestly so shameful that we keep falling for this gambit. It is nothing more than a rank "but this time it's different!"
Economics is what drives things. It is what drives things in households and it is what drives things in companies.
Unless times are truly great or the company is truly forward-looking, promises of freedom and independence from the business cycle are just an empty promise of creating a research lab.
What do you mean "we keep falling for it"? I remember after the acquisition there were tons of projects that left for Gitlab or other forges on principle of boycotting Microsoft. And for the many who stayed on Github, we still got about 6 years of pretty great free services before reliability really started to decline.
And it's not like GitHub's load stayed linear over the last 8 years since the acquisition. Repo creation and pushes went exponential about 2 years ago with the AI boom, so even with fantastic execution I think they'd still be struggling to host the ever-expanding archive of all the code in the world.
I remember discussions at the time where people predicted that this would certainly happen. If people “keep falling” for it, it’s not the same people. And Microsoft certainly wasn’t (and isn’t) a company you’d trust for such statements.
This Disney brain of the Americans is what the problem is. It's not good guys and evil guys. It's money. Money and power have mechanisms. Pinky promises, benevolence etc. don't mean anything in capitalist business. It doesn't mean it has to be all thrown out the window. It can provide a service for a price, you can take it. Without being invested emotionally, without brand loyalty. That's dummy stuff. Businesses are not charities, and even charities are often quite businesslike. Unlearn naivety, read literature, human culture has known about the effects and incentives around money and power, petty and grand, for a long time.
Neither me nor dozens of my acquaintances fell for it. 100% of us said "GitHub is toast, it's just a matter of time". And we and many others were right.
> It is honestly so shameful that we keep falling for this gambit.
I'm not sure who "we" is in this story, but the _most_ optimistic of my peers pointed to typical MS projects of that scale getting a little proper investment in interesting features and taking at least a couple of years to fail. HN sentiment wasn't positive either. The 99th percentile in favor of MS were fine with it, but the 90th percentile recognized the M&A for what it was, especially as specific features started showing their colours.
Lest this come across as a drive-by insult, I'm actually very curious who "we" is. Humanity is a very, very broad spectrum, and my intuition often doesn't appropriately capture the diverse backgrounds of real people, despite spending large amounts of time with (usually from working alongside) deck-hands, captains, sanitation workers, bankers, pilots, jackhammer operators, semi drivers, farmers, programmers, mathematicians, and a host of other people. The gap I'm seeing is likely in my understanding (rather than, e.g., the post being malformed), so I'd like to correct that.
GitHub had no reason to sell to Microsoft, they could have remained the bootstrapped company they started as, and rode the SaaS boom, since they were profitable on day 1. Seems a bit unfair to blame Microsoft though, because it was the founders who decided they wanted that sweet VC funding and Andreessen was happy to pay out.
Not sure if it mattered after that, but they had that weird Tom Preston-Werner scandal that got him fired. Since he was the CTO, I kind of suspect that sent them on a collision course with needing an exit for the VC round, and Microsoft paid out.
> I feel like MS went out of its way to make a point that GitHub and NPM would be independent orgs that no longer had to worry about making keep-the-lights-on money. It was positioned as a benevolent acquisition for the good of the development community.
Sorry, why the hell would any company do this? Goodwill is measured by MBAs in pennies and they paid *billions*.
This is like that time Google acquired the .dev domain "to reserve it for developers" but then ended up selling it- like everyone said they would.
You have got to be absurdly naive to believe that Microsoft does altruistic things; if there's no direct (or indirect) business case, they're not doing it: they are famously not a charity.
They have to be the richest men on the planet, retire, and then they'll do some charity.. perhaps because they feel guilty about what they did on an island or something.
This happens with almost every acquisition from Red Hat to WhatsApp.
If companies actually meant it then they’d sponsor these projects instead of buying them. The reason they choose to buy is so that they can make decisions about the direction of that project. If not immediately, then at least at some point in the future.
> I feel like MS went out of its way to make a point that GitHub and NPM would be independent orgs that no longer had to worry about making keep-the-lights-on money
A lot of companies say that when they acquire another, and it might be true for a few years, it might even be the actual intention of the people involved in making the acquisition, but it usually doesn't last.
> It was positioned as a benevolent acquisition for the good of the development community.
call me a skeptic, but can such a model exist (and has it ever existed) in a capitalist system?
It has never existed. Everything a business does is for power and profit, including the times when they pretend they aren't doing things for power and profit.
This is a general observation, no hard data, but I find there seems to be a wall at 2 years after an acquisition. By 2 years, much of the best talent has left the company entirely or moved somewhere else within it. Things can cruise along just fine for a bit, but as the institutional knowledge slowly leaves it gets worse and worse. Couple that with the bureaucracy and insanity of a global mega-corporation, and the quality fades slowly at first, then the decline picks up speed.
> I find there seems to be a wall at 2 years after an acquisition.
It's called a vesting schedule. ;)
What I've seen is that usually the founders and heavy hitters from the original company are very BS-averse and basically just stay around to collect their money and then jet for a situation that doesn't suck.
For the rest of the gang, it tends to bifurcate: some folks stay at the big company indefinitely after the acquisition because while they can see the suck, nowhere else pays as well or is as cushy (I know people who have been thinking about leaving for 12 years). Still others excel at big company work and make a happy career out of it for a while but don't stay forever.
This is the flipside of MBA-brain. Treating people as replaceable, equivalent cogs in a machine, thinking that the company itself, as an abstraction, is where value lies, when it lies just as much in the context and nourishing environment. You can't simply move a company from one place to another like a Lego brick and expect it to go on functioning as before, not as long as people have the freedom to leave.
> but as the institutional knowledge slowly leaves
I’d like to offer a different perspective: the “institutional knowledge” is often (but not always, of course) old-timers who have been gatekeeping knowledge in order to make themselves irreplaceable.
I’ve seen this a couple of times, even in faang-sized companies.
I’m not sure this is the case of GitHub though.
It might be due to lower-quality code spit out by some LLM, reviewed by some LLM, and shipped to production by some LLM-generated pipeline.

Also, wasn’t GitHub pushed to move to Azure?

Anyways, it surely is a strong signal of an engineering culture degrading.
I'm afraid this is a form of reversion to the mean. Successful startups are made of exceptional people: the founders, the initial investors, the first employees, the first clients. But when they get acquired by much larger companies, they are necessarily diluted in pool of people that are more "normal", less exceptional. This includes the customer base that is more "normal" as well. Slowly but surely, the extraordinary product/service the startup has been developing reverts to the mean. This is quite sad, because it feels inevitable. I'd like to know how to avoid it.
To paraphrase a popular quote from IBM: “Executives and MBAs can never be held accountable: therefore executives and MBAs must not be allowed to make decisions.”
Slightly less flippant: The only way to stop this is to stop letting companies like MSFT gobble up smaller companies. That doesn’t seem likely in the near future, though. Once the Borg assimilate something, it’s just a matter of time before it’s digested and drained of value.
The process is necessary for both sides. Acquisition by large companies is the primary way that people get rewarded for building good things. If you take it away, there won't be many startups left - all new developments will come from the big companies that can afford them, and only the types of developments those companies' managers want to make.
It's only "necessary" if one accepts that the current way is the only way.
I'm not really sure what the point of encouraging new development is if the end result is "big company scoops it up and makes it shitty, but people get to enjoy it for a few brief moments before that happens."
That could be A problem, but to me THE problem is that the larger companies buy these smaller companies for resource extraction, not to make the product better.
In this frame you can see that making the product worse (paying less for its upkeep) and raising prices are just two sides of the same coin - extract more resources.
Almost no big company has any reason to shepherd a product in a way that's beneficial to its users, because they have so much momentum that changing their approach either costs too much money or those in power are too insulated from the outcomes ("fix it for me or I will fire you, while I continue to make bad choices and underfund the product").
It's not inevitable that the founders have to sell to big tech. They wanted money more than the excellence of the craft. They got the money, the company got to grow and made way more profit than when it was small scale but excellent. The wheel keeps turning.
I was referring to the case where the founders and investors sell the startup to larger company. Of course, if they don't sell, and the company stays founder-led, the outcome is often better. I didn't know Zoho never took (serious) VC money.
It's very profitable in the short term, and later they can just move on elsewhere and do it to another company. It's not mismanagement at all, it's a solid strategy from the external point of view.
> GitHub is being managed the way other services get managed once they're bought by big companies. Initially fine, then starts to decline, then eventually craters
Can you explain what you mean by this? Like what does "fine" mean? What, specifically in the management, is the "decline"? What does "craters" mean?
In their defense, they dramatically "over"-report sev-2/3s (things like avatar URLs not loading in Saudi Arabia), which makes their cumulative uptime look much worse than it is.

If you filter for major/critical outages, their uptime for core services over the trailing 12 months is at two 9's across the board.

Also, a huge part of their cumulatively bad availability story is Copilot, a piece of functionality (LLM inference) for which most organizations have struggled to get even two 9's of availability over the last 9 months.
As someone who relies on it for all of my workflows at a normal job, core functionality issues leave me unable to get work done at least once a week at this point, and it's been that way for months.

The issues aren't profile pictures not loading in Saudi Arabia; they're botched merge jobs, git/API operations being down, pull requests not loading, etc. And that's on top of the plethora of UI bugs that have been pervasive for years without blocking functionality.
Two 9’s? You have to work pretty hard to do that badly. That’s like bragging you graduated with a C average from Harvard after your father endowed a chair to get you in.
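To put numbers on that (my own back-of-envelope arithmetic, not anything from GitHub's SLA), here is the downtime budget each "nines" level actually permits:

```python
# Downtime budget implied by each "nines" availability level.
# Assumes a 365-day year and a 730-hour month (8760 / 12).

HOURS_PER_YEAR = 365 * 24            # 8760
HOURS_PER_MONTH = HOURS_PER_YEAR / 12

def downtime_hours(nines: int, period_hours: float) -> float:
    """Allowed downtime within `period_hours` at e.g. nines=2 -> 99%."""
    availability = 1 - 10 ** -nines
    return period_hours * (1 - availability)

for n in (1, 2, 3, 4):
    print(f"{n} nine(s): {downtime_hours(n, HOURS_PER_YEAR):8.2f} h/year, "
          f"{downtime_hours(n, HOURS_PER_MONTH):6.2f} h/month")
```

Two nines works out to roughly 87.6 hours of downtime a year, or about 7.3 hours a month, which is why it reads as a low bar for a service this central.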
Given GitHub has become a utility service globally, this should frankly be worrisome to everyone, not just the developer community actively using it. It's intertwined with many things now beyond simply source code hosting and PRs. And I am surprised GitHub leadership is OK with the state of things. Having worked at a lot of 5-6 9's shops, this would have been all-hands-on-deck, all-roadmaps-paused, figure-it-out-or-perish sorts of stuff.
We don't have to let it be a utility service. It's not like the power and water to your house where laying new pipes is a monumental and stupid effort. $3 per month can get you a VPS to run your git hosting on - if you even need git hosting, and aren't just using GitHub because it's there.
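For the skeptical, here is roughly all a self-hosted setup involves, sketched with a temp directory standing in for the VPS (on a real box you'd swap in an ssh://user@host path):

```shell
set -e
tmp=$(mktemp -d)

# The entire "server" product: one bare repository on the remote host.
git init --bare "$tmp/server.git"

# A local working copy that treats it as its remote.
git init "$tmp/client"
cd "$tmp/client"
git config user.email you@example.com   # placeholder identity for the demo
git config user.name  "You"
git commit --allow-empty -m "first commit"

# On a real VPS this would be: ssh://user@host/srv/git/server.git
git remote add origin "$tmp/server.git"
git push -u origin HEAD

# Prove the "server" now has the branch.
git ls-remote "$tmp/server.git"
```

No forge, no web UI, no issue tracker, but for plain hosting of `git push`/`git pull`, this is genuinely the whole job.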
Some years ago I wondered how long it would take them to go the way SourceForge went. Once you grow too much without a proper leader, you will fall :(.
Sourceforge died in a very different way though. Bundling spyware/crapware in install packages for open source software was a serious breach of trust, and was pretty much the direct reason for mass migration to Github. Github is failing on the technical side, but they at least mostly have their brand value intact. I think it will still take quite a lot for a mass migration of the same scale to happen.
Microsoft specializes in taking successful products and pumping them full of malware, spyware, bloatware, and adware once they have a critical mass of users. It is often preceded by quality dropping significantly due to under investment and McKinsey being brought in to find a way to prop up declining revenues - of course the answer is never to invest in making it a superior product again, but monetization strategies.
Comparing GitHub and SourceForge as if they were cut from the same cloth is laughable to me. SF has always been a wretched hive of ads and dark patterns.
Not popular. Core. It was the trusted place for open source software. Then it was ads. Then the day they bundled there was a MASS exodus. And the 14 people who ran their own source code interfaces scoffed and said "see. I told you." And we all said "yup" - we knew something would happen one day, but that was a worst-case-scenario that few thought was even a remote possibility.
> And the 14 people who ran their own source code interfaces scoffed and said "see. I told you." And we all said "yup" - we knew something would happen one day, but that was a worst-case-scenario that few thought was even a remote possibility.
And nobody learned their lesson and they all piled over to the next centralized system that offered "FREE!".
I mean, we got ~15 years of great service out of them for free. I used to pay for my own servers in colo for all the stuff Github has been providing for free all that time. It'll suck to move, but I've done it before. It's hard to turn down the loss leader they want to give me, when it's a really good product. Now that it's stopped being a really good product, maybe it becomes easier to turn down, I dunno.
Given SourceForge only hosted Open Source software, and had no source of revenue beyond ads and sponsors for quite a long time, AFAIR, I think they get a pass on a banner ad.
For whatever it's worth, which is probably not much, I'm in my late 40s and I never really liked sourceforge either. Too many clicks to do anything (still true), and I didn't like cvs (also still true, but thankfully now irrelevant).
(My SF account dates from June 2004. I expect I was thinking about using it as version control for a FOSS project I was working on at the time, though I don't know why, as it seems SF didn't support svn until 2005. Maybe I couldn't find any better options? The pre-GitHub ecosystem was pretty bad! But, luckily, I ended up not having time for any FOSS stuff from about autumn 2004, so: problem solved. And when I next looked, in early 2010, everything seemed to be git+github, and all the better for it.)
CVS was the best option when SourceForge began, and Subversion was barely an improvement. SourceForge was critical to the growth of Open Source and Free Software in the 00s. Projects no longer needed to maintain their own revision control server, file server, forum, issue tracker, etc. SF.net wasn't great compared to any of the current generation of hosting services. And most Open Source projects were in an uncomfortable state of looking around for alternatives by the time GitHub arrived in 2008, because SF was slow to adopt newer technologies and was running on a skeleton crew. Most of my projects had their own forums/issue trackers, and were self-hosting git, by then. Ads stopped being a usable revenue strategy, so SF.net stopped being able to keep up with what developers wanted.
But, it had a few years where every OSS developer I knew had nothing but positive feelings toward SourceForge. It gave one of the projects I work on thousands of dollars worth of transit over the years. It's hard for folks who've only ever worked on an "everything for small developers is a loss leader" internet to understand that we used to pay for and manage our own servers. I had a $200/month bill for just my Open Source projects on a couple of colocated servers.
Yes, SourceForge went through a lot of shitty stuff. The overtly hostile stuff (adware inserted into OSS projects) happened after it changed hands. But that was after the revenue of their original model dried up and they couldn't stay on top of new development (being slow to offer a good git experience was a fatal mistake).
Anyway, it's not great now (though it is now owned by seemingly decent folks, who haven't really been able to find a way to make it work), and it went through a period where it was a borderline criminal enterprise, but it started out as a genuinely helpful part of the OSS community.
Not always. Before Dice bought them they didn't do the ads. I even remember, early on, when you had to submit a project for approval before you got a CVS repo.
> SF has always been a wretched hive of ads and dark patterns.
No, as others have said it wasn't always that way. And more importantly it's not that way now. But yes, for a while there it was the epitome of enshittification. How that worked out is kinda hopeful in a way: it went broke, was bought out, and went back to being something usable again. In fact, they added a lot of enhancements.
I know because I was one of the ones that went back to it. I didn't like having git being forced down my throat, and sourceforge was one of the few left that supports a whole pile of VCSs. I made my Makefiles [0] support it so I didn't need to deal with the UI, and ended up very happy.
Everyone wondered why I wasn't on GitHub, and questioned my sanity at the time. But I say not liking the git CLI is perfectly sane. Now that jj has come along, that excuse has gone, but it was a good one at the time. "GitHub sucks" has conveniently come along as a reason to stay on SourceForge.
Seriously, if you are considering ditching github, take a look at sourceforge. The current owners BizX deserve some reward for the time they've put into it, and their patience.
Have those outages actually been blocking your work? Somehow I haven't even noticed, just seen complaints on HN. I'm not saying it's not real, just wondering where the gap is.
A big part of my job is doing code reviews, and it's very common that pages or diffs just don't load. Or PRs literally don't appear in the PR list, even though they exist. It's a daily occurrence to play the 'is my internet down or is GitHub just being shit again?' game.
Oh, and don't forget the cases where the diff view sometimes misses some files for unknown reasons. Both in the 'new experience' and the 'legacy view'. You just can't trust it as much anymore.
Yes, many times. Roughly once a week this year my team or an associated team can't ship changes because PRs, GitHub Actions, or some other associated mechanism is down.
All of that is revisionist history at best. GitHub was a pile of shit long before Microsoft bought it. Has everyone forgotten when it was a coin-flip on any given day whether the site was even functional?

GitHub was in the right place at the right time to be a success despite being a massively cobbled-together mess.
While I wouldn't necessarily phrase it this way, there is a chart going around social media that tries to imply that GitHub had basically 100% uptime right up until the MS acquisition. All it takes is either 1) having been there or 2) a cursory search of HN to know that this is a complete fabrication.
Hm. I read that as saying that their users are writing more code with the assistance of LLMs, thus placing more stress on their systems. I do not read it as making any comment about their own practices.
In our internal metrics you can see a clear increase in PRs and CI runs in general that tracks with agentic-coding adoption, and it's significant, so I absolutely buy that GitHub would be struggling to take the brunt of that without big changes.
A charitable view might be that changing which fingers you're using to plug the holes in the dike is a lot harder when the volume of water on the other side is increasing exponentially.
I think that is exaggerating things a bit... GitHub is alive and well, and they're hosting more and more projects each month. A few well-known projects leaving every now and then doesn't exactly spell doom for GitHub
Even if you go service by service you're talking about critical things like `git` operations (literally what they're named for) at a single nine, and stuff that's pretty basic like static web hosting as only two nines. They literally can't even keep static webpages up.
So what? People have to unlearn this kind of brand loyalty. Companies are not people and not your friends. They are in the business of making money. We need to be more aloof and simply use their tools when useful and not get emotionally attached. Most of the managers and likely the devs had a good deal. Good money, and if it collapses, people still have a good resume line and can move on. And we users can also move on. There are plenty of other service providers of code hosting and CI/CD.
I feel like scaling is rarely brought into the conversation. It's easy to hate on MS, especially with their AI-slop narratives, but they did get a sudden and then ongoing influx of users that the system was not designed to handle.
It’s the kiss of death and it’s not anything new in terms of product failure scenarios
The cost you're talking about doesn't change based on how long the session is idle. No matter what happens, they're storing that state and bringing it back at some point; the only difference is how long it's stored out of GPU between requests.
Are you sure about that? They charge $6.25 / MTok for 5m TTL cache writes and $10 / MTok for 1hr TTL writes for Opus. Unless you believe Anthropic is dramatically inflating the price of the 1hr TTL, that implies that there is some meaningful cost for longer caches and the numbers are such that it's not just the cost of SSD storage or something. Obviously the details are secret but if I was to guess, I'd say the 5m cache is stored closer to the GPU or even on a GPU, whereas the 1hr cache is further away and costs more to move onto the GPU. Or some other plausible story - you can invent your own!
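To make that pricing point concrete (the two write prices are the ones quoted above; the interpretation is my own speculative framing, not anything Anthropic has published):

```python
# Implied premium of the 1-hour cache TTL over the 5-minute TTL,
# using the per-MTok cache-write prices quoted above for Opus.
price_5m_write = 6.25    # $/MTok, 5-minute TTL cache write
price_1h_write = 10.00   # $/MTok, 1-hour TTL cache write

premium = price_1h_write / price_5m_write
print(f"1h writes cost {premium:.2f}x the 5m writes")  # 1.60x

# If that markup were pure storage rent for the extra 55 minutes of
# retention, the per-minute carrying cost would still be nontrivial --
# consistent with the guess that the tiers sit in different storage.
extra_minutes = 60 - 5
markup_per_minute = (price_1h_write - price_5m_write) / extra_minutes
print(f"${markup_per_minute:.4f}/MTok per extra minute of retention")
```

A 60% markup for twelve times the retention window is the only hard data point we have from the outside; everything else about where the KV cache physically lives is inference.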
Storing on GPU would be the absolute dumbest thing they could do. Locking up the GPU memory for a full hour while waiting for someone else to make a request would result in essentially no GPU memory being available pretty rapidly. This type of caching is available from the cloud providers as well, and it isn't tied to a single session or GPU.
> Storing on GPU would be the absolute dumbest thing they could do
No. It’s not dumb. There will be multiple cache tiers in use, with the fastest and most expensive being on-GPU VRAM with cache-aware routing to specific GPUs and then progressive eviction to CPU ram and perhaps SSD after that. That is how vLLM works as you can see if you look it up, and you can find plenty of information on the multiple tiers approach from inference providers e.g. the new Inference Engineering book by Philip Kiely.
You are likely correct that the 1hr cached data probably mostly doesn’t live on GPU (although it will depend on capacity, they will keep it there as long as they can and then evict with an LRU policy). But I already said that in my last post.
You can send books to your kindle over USB, and I do that all the time for larger books that are above the size limit on the email system.
The big problem is that Amazon no longer allows you to download books from their site to your desktop, so you have no way to actually get a purchased book and send it to the kindle even over USB. However, if you buy non-DRM books from other book sellers you won't have this problem.
They block you from doing this if you're not logged in (as I discovered after wiping and rooting one to give to a friend recently).

As evidence, note that the instructions for rooting them require the device to be registered - this is because it won't be accessible over USB until you do so: https://kindlemodding.org/jailbreaking/WinterBreak/
> The big problem is that Amazon no longer allows you to download books from their site to your desktop
I've bought a number of books on Kindle that were explicitly marked as being sold without DRM. Does this mean I've lost access to any DRM-free downloads that I haven't already backed up?
If you bought them from Amazon, you won't be able to get them after the cutoff date directly to that Kindle via WiFi. You may not be able to get them in a format that old Kindles can read at all.
Download and back them up now. Or just pirate them if you need them later.
The entire Kindle store system will cease working on older Kindles after the cutoff. Still works as a reader, but expect to lose things like location sync across devices.
I don't buy from Amazon, I don't turn on WiFi on my Kindle because it eats battery life, I always travel with a laptop, and I only use it to read outdoors. So I really don't care. It's my beach book. At home, I'd rather read on my iPad.
Oh, and FWIW, you can install Tailscale to a jailbroken Kindle and Taildrop files to it over WiFi, if it can read the format (for the old ones being discussed, that's mobi or azw3).
Google Drive reneged on unlimited storage for Education accounts once they realized that universities also contain researchers who need to store huge amounts of data.
Not only did they cut unlimited, they went to insultingly low limits with not much warning after all their nice promises. Moderately large universities ended up with less space per student than the 15GB they give out to anyone for free. It was a pretty bad rug pull.
Massive fraud from abroad didn't help there either. A favorite backup spot for terabytes of pirated media, complete with guides on which schools had good @edu addresses for it.
Hadn't even considered your obvious point, a good one!
Something similar is happening with GitHub Copilot too. It's impossible to know what a "request" is, and some change in the last couple of months has seen my request usage go up for the same style of work. Toss in the bizarre and impossible-to-understand rate limiting that occurs with regular usage, and it's pretty obvious that these companies are struggling to scale.
> A request is any interaction where you ask Copilot to do something for you—whether it's generating code, answering a question, or helping you through an extension. Each time you send a prompt in a chat window or trigger a response from Copilot, you're making a request. For agentic features, only the prompts you send count as premium requests; actions Copilot takes autonomously to complete your task, such as tool calls, do not. For example, using /plan in Copilot CLI counts as one premium request, and any follow-up prompt you send counts as another.
This clearly isn't true for agentic mode though. This document is extremely misleading. VSCode has the `chat.agent.maxRequests` option which lets you define how many requests an agent can use before it asks if you want to continue iterating, and the default is not one. A long running session (say, implementing an openspec proposal) can easily eat through dozens of requests. I have a prompt that I use for security scanning and with a single input/request (`/prompt`) it will use anywhere between 17 and 25 premium requests without any user input.
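For concreteness, the option mentioned above lives in VS Code's settings.json (which accepts JSONC-style comments); the value here is purely illustrative, not a recommendation:

```json
{
  // Ask before the agent continues past this many requests in one
  // session. The default is considerably higher than 1.
  "chat.agent.maxRequests": 5
}
```

Whether those agent-driven requests count as separate premium requests is exactly the point in dispute in this thread.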
Do you have any evidence to support your claims? I keep a pretty close eye on my usage and have never seen it deviate from "1x/3x requests per time I hit enter". Is there a reproducible scenario I can try that will charge multiple requests for a single prompt?
I'm finding the opposite with Copilot. A request is a prompt, with some caveats around what's generating the prompt. I am quite happily working with opus 4.6 at 3x cost, and about a third of the way into the month I'm sitting at ~25% usage of a Pro+ subscription. I find it quite easy to track my usage and rate of usage.
The overall context windows are smaller with Copilot, I believe, but it doesn't appear to be hurting my work.
I'm using it for approx. 4 hours a day most days, generally one-shotting fun ideas I thoroughly plan out in planning mode first. I have my own version of the idea -> plan -> analyse -> document implementation phases -> implement-via-agent loop: simulations, games, stuff I'm curious about, and resurrecting old projects that never really got off the ground.
How can you hope for anything better if you consider it an us versus them situation? When they say "We don't want to increase inequality" and the response is "We don't believe you". Where do you go from there?
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits attributable to the use of AI is taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Aggressive taxation like this is most often criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying things are moving too quickly, so that's just yet another positive effect.
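As a toy illustration of the incentive math behind this hypothetical 75% rate (all numbers made up):

```python
def retained_ai_gain(profit_increase: float, tax_rate: float = 0.75) -> float:
    """Portion of an AI-attributable profit increase a company keeps
    after the hypothetical tax proposed above."""
    return profit_increase * (1 - tax_rate)

# A $100M AI-attributable profit increase still leaves $25M on the table,
# so adoption stays rational while most of the gain is redistributed.
print(retained_ai_gain(100_000_000))  # 25000000.0
```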
> When they say "We don't want to increase inequality" and the response is "We don't believe you". Where do you go from there?
The response is "we don't believe you" because their actions show that they are hellbent on accelerating inequality using AI and they have offered absolutely no concrete plan or halfway convincing explanation of how, if their own predictions of AI's future capabilities are correct, we're supposed to go from here and now to a future that isn't extremely dark for the vast majority of humans on Earth (to the extent that said humans continue to exist).
The work they have done in this direction so far is not serious, so it's not taken seriously. They obviously care much more about enriching themselves than slowing or reversing current trends.
If they want to be taken seriously, maybe they should start acting like they're serious about anything besides their own wealth and power. And I do mean acting: they need to show us through their actions that they are serious.
Seriously. They can say they want to share their gains all they want, but I don't see them spending any lobbying money on things like universal income (and if Altman can afford to lobby for age verification laws, he can certainly afford to lobby for things that actually benefit society). The reality is they don't lobby for anything that would take wealth away from them, and any redistribution of wealth (such as a 75% tax rate) would by definition take wealth away from them.
You can, but then what? Do you judge what they say as if their perspective is the same as yours, and then conclude from that context that what they suggest could only come from an evil person? That seems to be what a lot of people do. What if they actually think what they are suggesting is the best thing for the world? How can you tell what is in their minds?
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
The idea that we cannot possibly use people's actions to judge them is ridiculous. Musk thinks that the world would be a better place if the races were separated and if all charitable giving was ended. I think that's monstrous.
The problem is that people have a million stories to explain the observed actions, most of those stories are bullshit, and people repeating them know fuck all about the decision-space in which these actions were chosen and taken.
This is an accidentally good example: we don't know what motivated him, and your ridiculous reason is unsound because it would also be a bad thing to do if he were clearing a wasps' nest on someone else's property in the middle of the night.
I suspect that they are not a bad person but someone radicalised by the media they consume.
Firebombing someone's house is a bad thing to do. It doesn't mean they are necessarily a bad person. Anger and confusion can make good people do bad things.
I don't care if Altman is secretly a good person. I care very deeply that he is taking actions to harm the world in grievous ways and is not doing any visible thing to mitigate the extreme damage he will do.
"Altman is secretly a good guy" doesn't pay people's mortgages.
I doubt it nets positive or even cancels out the damage, but if we're taking the fuller picture, then we also shouldn't assume Altman and the other AI company CEOs are "taking actions to harm the world in grievous ways" for shits and giggles, or for a large payday. Despite what skimming HN would make one believe, AI tools are actually useful in science, technology, and all kinds of productive work.
So the silver lining is this: they're not risking burning the world down for porn or bitcoin, but for a general improvement in everything across the board, one that happens to have the unfortunate side effect of destroying the value of labor.
I don't think that Altman is a Dr. Evil level villain who just wants to hurt people. I instead think that he does not care about the damage he causes on his path to personal wealth and glory and I think that this is precisely as terrifying. I'm sure that the machines made of my corpse would be used for productive purposes too.
Altman probably won't torture my cats to death. What a guy.
>How can you hope for anything better if you consider it an us versus them situation?
Because it IS an us vs them situation.
They're awfully good at turning it into an us vs us situation, whether it's blaming our parents (boomers), blaming immigrants, blaming Muslims, or (their favorite) blaming the unstoppable forward march of technological progress (e.g. AI).
The media organizations they own are constantly telling these stories because it protects them.
>The Government could legislate that any increase in profits that are attributable to the use of AI are taxed
Nothing a billionaire loves more than misdirection and a good scapegoat. This is why Bill Gates made the exact suggestion you just did.
When THEY are the problem they love a bit of misdirection, especially when the "problem" is a genie that can't be put back in its bottle.
They're terrified that we might latch on to the solutions that actually work (i.e. tax them to within an inch of their life) and drive a populist politician to power which might actually enact them.
That's because my statement wasn't intended to be scientific proof of anything; it was an explanation of the function of the propaganda that got recycled through you, and the intent behind it.
The billionaires could start to earn trust by lobbying for laws and programs that help the poor and displaced. Put money in to retraining programs to help people who lose their jobs. So far they seem to be doing the opposite, CEOs are publicly declaring ‘fuck you, got mine’ and leaving it at that.
Nick Hanauer has lobbied for higher minimum wages.
Michael Bloomberg has lobbied for healthcare.
Pierre Omidyar has spent about a billion on economic advancement non-profits.
Gates Foundation - Bunch of stuff.
Warren Buffet - Too much to count
George Soros - For all the antisemitism, the kernel of truth in the lie is that he spends a lot of money trying to make the world better.
Chuck Feeney gave away $8B; I'm sure some of it went to lobbying for better policies.
A large number advocate for a universal basic income.
More advocate for things that they clearly think are good for the world, even if you, personally, do not.
Jack Dorsey, Reid Hoffman, hell even Elon Musk (he may be wrong about everything, but he's openly advocating for what he believes is good)
Sam Altman has done WorldCoin and is heavily invested in Nuclear Fusion. You can criticise the effectiveness or even the desirability of the projects, but they are definitely efforts that if worked as claimed would be beneficial.
Many billionaires spend money on non-profits to push for change, often they do not put their name on it because it makes them a target for attack, or simply that by openly advocating for something the lack of trust causes people to assume whatever they suggest has the opposite intention.
I'm not arguing that they are doing the right thing. I'm arguing that for the most part they are advocating for and investing in what they believe to be the right thing. Why treat them as the enemy, when a dialog might cause them to reach common ground about what is the right thing?
>Why treat them as the enemy, when a dialog might cause them to reach common ground about what is the right thing?
People like Elon literally are the enemy. He used his wealth to literally change our government in his favor. The idea that we need to go and have polite discussions to maybe change his mind, while he gets to stomp all over us (his DOGE efforts literally resulted in people dying), is absurd. If a dialog with them was going to work it would have happened a long time ago, but the more we learn about these people the more obvious it is that they believe themselves to be smarter and better than the rest of us. They aren't going to listen to others, and pretending that they will seems like deflecting and giving up in advance. Our best hope is that people can get enough power to regulate billionaires out of existence before a revolution does it instead.
Please consider your biases. Musk could not have “changed” the government if the DNC didn’t hand it to Trump on a platter. Republicans took over because serious people had had enough with the DNC’s full-throated embrace of two things: race-based selection (with the unpopular Harris’s undemocratic coronation as the flagship example), and the relentless focus on trans ideology (to the point anyone not endorsing the fullest embrace of that idea has been declared equivalent to the worst racist). Without that, Democrats would have remained a powerful and relevant party and Musk would have gotten nothing he wanted.
Here's an idea for how to do that: treat frontier AI as a sort of 'common carrier'. The only business that frontier AI labs are allowed to conduct is selling raw tokens - no UI. Thus, 'claude code' would have to come from some other company. This would segment the AI industry, and, maybe, prevent a single entity (or small number of entities) from capturing all value.
Sounds promising honestly. One of the scariest parts of the big AI labs is all of the exclusive training data they get through their UIs. (It’s unclear whether distillation is a feasible way to close the gap).
If there were another party involved, that would (hopefully) diversify power that (potentially) comes with those streams of data.
It’s a bit ironic that the USA has mostly abandoned interoperability after being one of the pioneers with the American manufacturing method. [0]
If I had the answer to that I would probably be a politician instead of a systems eng, but off the top of my head: build out parallel economies at the state level, where people in the US actually live, ensuring QoL standards, then gradually renegotiate back up to the federal level. It would require, well, united states eventually, but the general thrust is to shed corporate capture so that people see their government actually benefiting them and providing tangible life improvements in real time.
This is interesting to see, since on another HN post everyone is bemoaning how expensive it's getting to use frontier models because Anthropic is massively throttling Claude Pro and Max plans. That's certainly not going to become more accessible to us normal folk through taxation.
AI is actually a mass decrease in inequality, as much as the Gutenberg printing press was. It takes something that used to be the foremost example of pure bourgeois and intellectual privilege - the culture contained within millions of books and other instances of human creativity - and provides it to everyone for the cost of a few thousand bucks in hardware and a few watts of electricity.
I can't think of any period in time where it was so easy to go into business yourself and to generally have access to the same "means of production" as the biggest companies have.
If you want to use LLMs, you can either use cloud resources at what I think are really reasonable per-token prices compared to the value, or to set up your own server with an open-weights model at a comparable level of quality (though generally significantly slower tokens/s). In any case, you absolutely don't have to pay OpenAI/Anthropic/Google if you don't want to.
I'm well aware of this: I bought a pretty beefy (consumer grade beefy) GPU machine and run all sorts of open weight models. I do think there is potential.
But are you expecting 360m Americans to start their own businesses? That is a solution that doesn't scale. Consumer-grade GPUs aren't going to scale all that much either, and the cost of the models is going up rather than down as vendors start seeking profits. We already see the memory and storage markets exploding in cost due to rising demand as well.
Also: a handful more already-well-off people going into business for themselves is not going to move the needle on inequality. When people say "it's never been a better time to start your own business", they still mean the people who already have their needs met and have the capital to live off while their business becomes viable: in other words, the people who have always started businesses. Already-rich people.
It's never been a worse time for the poor or middle class to think about starting their own business. Prices on everything are rising, it's getting to be a struggle for even the middle class to continue to afford their homes. Healthcare is even more fraught than ever before, and if you're lucky enough to have a decent plan from your employer, aint no way you're going to give it up to go start a business.
> But are you expecting 360m Americans to start their own businesses?
I do not. I grew up on post-scarcity utopias like Star Trek, coupled with social capitalism, and believe that when there is a market need, people with the interest to tackle it will do so, even in the face of personal financial risk, but I absolutely don't think that it should be the default for everyone. Where there's no strong economic benefit for others to work, I would hope that we could offer everyone UBI, so that a comfortable basic level of life is available for everyone, without having to invent bullshit jobs that aren't needed.
I know I sound naive, but I truly believe that we can move into a future where human value is decoupled from their job, without going into communism.
The answer to that question was the US before the 1970s when manufacturing was still onshored. So many joe shmoes literally started companies in this era taking some garage creation and manufacturing it at scale at a local plant.
Now that all takes place in China. With layers of middle men who collect arbitrage between you and the Chinese manufacturers they connect to you. With tariffs. Weeks of international shipping. Enough volume of orders to justify international shipping at all. Enough production capacity ordered to even be worth while making your thing versus larger orders from around the world all being made in china.
> AI is actually a mass decrease in inequality, as much as the Gutenberg printing press was. It takes something that used to be the foremost example of pure bourgeois and intellectual privilege - the culture contained within millions of books and other instances of human creativity[.]
I would rather claim that this is a proper description of shadow libraries [1].
Because success is individual, inequality is statistical.
It is true that AI gives ordinary people a lot more of a chance to be successful.
But do not forget that success depends on lots of factors that are not in one's control: knowing the right people, the time being right for what you are doing, and many others. So while the mechanics of success are a lot different from a lottery, it does not work much differently: roughly 1 in a million attempts succeeds.
Yes, AI gives everyone more lottery tickets, but it gives rich people a lot more tickets.
Swartz died in 2013, nearly a decade before LLMs. It is distasteful to put words in the mouths of the dead by invoking him here.
Even local AI concentrates power in the hands of a few, the few who can afford the hardware to run it, and the few who have the luxury of enough time and energy to devote to engaging with the intricate, technical rabbit hole of local models.
It is lauding his accomplishments, yes. Why bring him up in specific if there is no relation intended? There are many broad shouldered giants in this space.
If they were being honest they would ask explicitly for permission instead of advertising opt-out. Now you might ask: who will explicitly give Microsoft permission to train on their private works? No one will -- and that's the point: this is a form of theft.
And how many people who use git on github go to the website? I only do when my token has expired and I need to grab a new one to push again. Which is every 90 days. Github.com is mostly invisible infrastructure to me.
This problem is solved by not having a token. GitHub and PyPI both support OIDC-based workflows. Grant only the publish job access to the OIDC endpoint; then the Trivy job has nothing it can steal.
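A minimal sketch of what this looks like with PyPI trusted publishing in GitHub Actions (workflow details are illustrative, and the PyPI project must be configured to trust this repo/workflow beforehand):

```yaml
name: publish

on:
  release:
    types: [published]

permissions:
  contents: read   # default-deny: no job gets the OIDC token implicitly

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # only this job may mint an OIDC token
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      # Exchanges the OIDC token for a short-lived PyPI token; no stored secret
      - uses: pypa/gh-action-pypi-publish@release/v1

  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # No id-token permission here: a compromised scan step has nothing to exfiltrate
      - uses: aquasecurity/trivy-action@master
```

The key point is that `id-token: write` is scoped to the publish job alone, so the scanner runs with no credentials worth stealing.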
You should be using build artifacts, not relying on `uv run` to install packages on the fly. Besides the massive security risk, it also means that you're dependent on a bunch of external infrastructure every time you launch. PyPI going down should not bring down your systems.
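One way to sketch the artifact approach (image choice and the `myapp` entry point are hypothetical): build the wheel once in a builder stage, then have the runtime image install only that artifact rather than resolving from PyPI at launch. In practice dependencies would come from an internal mirror or be pinned and baked in at build time:

```dockerfile
# Build stage: has network access, builds the wheel exactly once
FROM python:3.12-slim AS build
WORKDIR /src
COPY . .
RUN pip install build && python -m build --wheel

# Runtime stage: installs only the artifact produced above
FROM python:3.12-slim
COPY --from=build /src/dist/*.whl /tmp/
RUN pip install /tmp/*.whl && rm /tmp/*.whl
CMD ["myapp"]
```

Once the image is built, launching the service no longer depends on PyPI being up.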
This is the right answer. Unfortunately, this is very rarely practiced.
More strangely (to me), this is often addressed by adding loads of fallible/partial caching (in e.g. CICD or deployment infrastructure) for package managers rather than building and publishing temporary/per-user/per-feature ephemeral packages for dev/testing to an internal registry. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.
There are so many advantages to deployable artifacts, including auditability and fast rollback. Also, you can block so many risky endpoints from your compute's outbound networks, which means that even if you are compromised, it doesn't do the attacker any good if their C&C is not allowlisted.
1. https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...