A leading theory in neuroscience is that human brains are fundamentally prediction machines too, constantly predicting sensory input, other people's behavior, the next word in a sentence. "It's just prediction" isn't the gotcha you think it is. Prediction and attention turn out to be a surprisingly powerful foundation for intelligence.
The "just a text predictor" framing was fair a couple years ago but hasn't kept up. Current models can genuinely identify untested edge cases even when coverage is 100% (see the sketch below). Are you sure you're using the latest and greatest models?
The architecture started as next-token prediction, sure, and yes, human judgment is still required, but that judgment is being captured and integrated too.
Every time millions of people use these models, their feedback feeds the next round of improvements.
Also, these models don’t need to replace your best engineers to be disruptive. They just need to outcompete the bottom of the bell curve. For a lot of junior-level work, we’re already getting close.
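To make the coverage point concrete, here is a minimal Python sketch (a hypothetical function and test, not from any real codebase): one test executes every line, so line coverage reports 100%, yet real edge cases remain untested.

```python
def normalize(values):
    """Scale a list of numbers so the largest becomes 1.0."""
    peak = max(values)                 # raises ValueError on an empty list
    return [v / peak for v in values]  # ZeroDivisionError if peak == 0

def test_normalize():
    # This one test runs every statement, so line coverage reports 100%.
    assert normalize([1, 2, 4]) == [0.25, 0.5, 1.0]

# Edge cases no test exercises, which a reviewer (or a model) can still flag:
#   normalize([])       -> ValueError from max()
#   normalize([0, 0])   -> ZeroDivisionError
#   normalize([-2, -1]) -> returns [2.0, 1.0]; is that intended?
```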
> Are you sure you're using the latest and greatest models?
Claude Opus 4.6 (high), specifically.
As for human brains: every self-respecting Neural Networks 101 course is prefaced with "don't draw analogies to the human brain". And for good reason. Natural neural networks are fundamentally far more complex at every scale.
Also, the brain indeed predicts, but it also verifies and learns from its predictions. LLMs don't do that, not in real time at least.
The training data commons is to AI what oil reserves are to petroleum economies: a collectively generated resource of immense commercial value. Every book written, every forum post answered, every photo shared, every line of code contributed... billions of people built the knowledge base that makes these models work. Without that collective human output, AI is nothing.
Alaska and Norway understood something critical when oil was discovered: if you don't assert collective ownership of the resource before private companies capture all the value, you never will. Alaska amended its constitution. Norway built the largest sovereign wealth fund on earth. Both were acts of people saying "this belongs to us, and we deserve a return on its extraction."
We are in exactly that window right now with AI. The resource is being extracted at an incredible pace and almost all the value is flowing to a handful of companies. The longer people wait to assert sovereign ownership over the collective intelligence that makes AI possible, the harder it becomes.
If you think this is crazy, ask yourself what’s actually crazier: demanding a share of the value built on your collective labor, or watching trillions of dollars get extracted from it and saying nothing.
The idea of Alaskans getting a check just for existing sounded crazy too, right up until it didn't.
> Alaska and Norway understood something critical when oil was discovered: if you don't assert collective ownership of the resource before private companies capture all the value, you never will. Alaska amended its constitution. Norway built the largest sovereign wealth fund on earth. Both were acts of people saying "this belongs to us, and we deserve a return on its extraction."
This is also true for the first commercially exploited natural gas fields in the world, in the Netherlands. This ruined the Dutch manufacturing industry, and became a textbook example of the development of one sector harming others, known as Dutch disease [].
This is a great point. The Netherlands is the cautionary tale of what happens when you don't do what Alaska and Norway did. A massive resource boom without proper public management hollowed out the rest of the economy.
If a handful of companies capture most of the value from AI while it simultaneously displaces workers across every other sector, that's Dutch disease applied to the entire knowledge economy. One sector booms, everything else withers.
If you assert ownership over physical infrastructure, the data centers just move to another country or eventually to space.
But the model is built on us. You can move the server anywhere you want. You can’t escape the fact that everything inside it came from human minds. That’s an ownership claim no one can relocate away from.
To move beyond that default you need to organize into things like communities, lobbying groups, or even governments.
Ownership of singular non-physical objects is a polite lie we tell ourselves because it makes us feel more secure in a universe filled with information chaos. The moment you open your mouth or move your pen you no longer own what is in your mind; it is now entropy. Lose control of that entropy and it belongs to anyone with the proper tooling to record it. This is a universal law of information; it is beyond the laws of men and their fickle will.
Much like we build on information from our past generations, AI will build its own new information on those foundations. And since AI is an entity of information alone it is highly probable it will do it far better than we will and forever cement us in a prison of our own making.
No offense, but this comment makes virtually zero contact with reality.
Our entire civilization runs on your "polite lie" of owning non-physical things. Patents, copyrights, trade secrets, licensing agreements, NDAs. Trillion dollar companies are built on the legal enforceability of intellectual property. The software you're using to type this comment exists because someone owns the code.
Calling information "entropy" doesn't make contract law disappear. We decided collectively that people and institutions can own ideas, and we built the modern economy on that decision. You can argue that's a fiction, but it's a fiction that everything around you depends on.
You can't invoke "universal laws of information" to dismiss public claims to training data while the companies training on it aggressively enforce their own IP. They patent their architectures. They copyright their outputs. They sue competitors for misuse. They clearly believe in ownership of non-physical objects when it benefits them.
I agree with about 85% of what you said, but this:
> We decided collectively
"We" didn't decide that copyright would be 75 years past the death of the authors heirs. Powerful corporations that have lobbyist representation "collectively decided" that on our behalf. In 2011 they were trying to put all this copyright law under the Trans Pacific Partnership making it an international issue and expressly taking away the rights of the people to change it. For most citizens the original term of 7 years was enough before it became public domain.
If citizens had real representation and not FAANG capital and lobbying, they could easily vote to tax AI, and most of them would.
(Not intending to be snarky, but this isn't my area of knowledge in the least.) Didn't the AI organizations 'get it both ways' when they trained on a vast collection of works under copyright and then purely 'own' the outcome?
That's not snarky at all, that's exactly the point. They did get it both ways.
The comment I was responding to argued that ownership of non-physical things is basically a "polite lie" and that information is just entropy that belongs to whoever can capture it. My point was that the AI companies clearly don't believe that when it applies to them. They patent their architectures, copyright their outputs, sue competitors for IP violations, and lock down their model weights. They fully believe in ownership of non-physical things.
But when it comes to the billions of people whose work they trained on? Suddenly information is free-flowing entropy that belongs to no one.
That's the asymmetry at the heart of this. The rules around IP apparently apply when it protects their profits, but not when it would obligate them to share those profits with the people whose work made them possible. Which is exactly why the public needs to assert a claim now, before that asymmetry gets any more entrenched.
Also worth knowing: collective intellectual property already exists. ASCAP and BMI have been doing exactly this for decades. Individual songwriters can't enforce their rights every time their music gets played, so they pool their IP, license it collectively, and distribute the revenue. The problem they solved is almost identical to the training data problem. Each individual contribution is tiny, but the collective value is enormous. Applying this at the scale of the general public would be novel, but the underlying mechanism isn't. The concept works. It just hasn't been applied to training data yet.
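The mechanics are simple enough to sketch. Here's a toy Python version of pro-rata pooling (the numbers and the flat pro-rata split are illustrative, not ASCAP's or BMI's actual formulas):

```python
def distribute_pool(revenue, usage):
    """Split pooled licensing revenue pro-rata by measured usage."""
    total = sum(usage.values())
    return {member: revenue * plays / total for member, plays in usage.items()}

# Illustrative only: three songwriters, one licensing pool.
payouts = distribute_pool(
    revenue=1_000_000,
    usage={"writer_a": 70_000, "writer_b": 25_000, "writer_c": 5_000},
)
print(payouts)  # {'writer_a': 700000.0, 'writer_b': 250000.0, 'writer_c': 50000.0}
```

No per-play enforcement by individuals; measurement happens at the pool level.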
I mean, the AI companies want it this way, but the same laws of information apply to them too. They can patent whatever they want, but as we've seen, other nations use their models to distill knowledge into other models, and there is almost nothing they can do about it.
Patents, copyright, and lawsuits are all post hoc actions, which means the milk has already been stolen. And they only work if the rule of law is respected, which is not going so well lately.
We are seeing this already: there is little to no moat between the models, and nearly everyone with the needed compute seems to catch up pretty quickly. And when those rivalries cross national borders, the only solution to these problems quickly becomes violence.
With how information works AI wins this game in the long run. Individual humans scale poorly and their ability to individually acquire information is a slow process. Looking at this on a company by company basis is not the proper way to show how the future with models is going to play out.
This is interesting.
As a naive user I’ve gotten the gut feeling of commoditization among the models. I assumed the data center capacity push is intended to be the differentiator but that still seems utility-like over time. (and the data centers in space concept seems like good PR and IR, but to me, technically… ambitious)
> the data centers just move to another country or eventually to space
The same line of reasoning that purports that billionaires will flee if their taxes go up.
Spoiler alert: they don't.
Also, data centers in space is not a serious idea. It's been beaten to death that this isn't economical. People like Musk are proposing that as a possibility for the sole reason of keeping regulation away. "Well if you regulate us we will just move into space". They won't because they can't because physics.
You can only use each barrel of oil once, so it is not remotely the same thing. It's like torrenting a movie vs stealing someone's car. My labor has been compensated and nothing has been extracted.
Fair point, data isn't scarce like oil. Nobody's losing their forum posts. That part of my analogy is weak.
But you're answering a question I'm not asking. The question isn't "was something taken from you." It's "who deserves a share when collective human output generates trillions in commercial value."
Your torrenting analogy makes my case. Nobody loses their original movie when it gets pirated either. We still recognize that the people who made it deserve compensation when others profit from it. The entire IP enforcement apparatus is built on exactly that principle.
Typically you don't get compensated for secondary or higher order value generated by your work. Every software startup today is only possible because of the massive amount of collective effort to build the computing hardware that it runs on and the work to construct the actual physical network of the internet. But that doesn't mean that you have to pay all those people for that work just because your company ends up making billions. Or you could say that actually the company does pay those people indirectly through all the economic activity and tax revenue generated. AI is the same. No special rules are needed.
Nobody is proposing that AI companies track down individual contributors and pay them based on their specific contribution. That's not how any of this works.
The Alaska Permanent Fund doesn't pay Alaskans based on how much oil was under their particular backyard. It pays a dividend from a public fund because the resource is collectively owned. The mechanism is simple: companies extract a public resource, a percentage goes into a fund, the fund pays out to the public.
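As a back-of-the-envelope sketch of that mechanism in Python (every rate and figure here is hypothetical; this is not Alaska's actual statutory formula):

```python
def per_capita_dividend(extraction_value, royalty_rate, fund_yield, population):
    """Royalty on extraction feeds a public fund; the fund's earnings pay a dividend."""
    deposit = extraction_value * royalty_rate  # the owner's cut of the resource
    earnings = deposit * fund_yield            # only the earnings get distributed
    return earnings / population

# Hypothetical numbers: $100B extracted, 12.5% royalty, 5% yield, 300M people.
print(f"${per_capita_dividend(100e9, 0.125, 0.05, 300e6):.2f} per person per year")
# -> $2.08, from a single year's deposit alone; a real fund compounds deposits.
```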
And the infrastructure comparison doesn't quite work. The people who built computing hardware and internet infrastructure were paid for that work at market rates by the companies that hired them. The billions of people whose writing, code, images, and knowledge trained these models weren't paid anything for that specific use. That gap is the whole point.
How is an author fairly compensated when you torrent their book? Should we just stop paying for media because it's infinitely reproducible?
Nothing physical is being stolen when a company makes a clone of a product based on another company's designs, but that doesn't mean we shouldn't have patent laws.
The author is not fairly compensated in that case, but if you buy the book, he is. If you bought books and learned to code from them and then went on to go found a startup that ended up being worth billions, you don't owe that author one dollar more than you already paid him when you bought his book. That's more like the AI case here.
Purely anecdotal, I know, but a fair few creatives I know are ultimately just happy that people enjoy their works and don't really care how they're getting them. I understand that's a privileged attitude in the current world, as they are all living relatively comfortable lives, but at least philosophically I agree with it. We as humans have done a lot to reduce scarcity for a great many things, but we still cling to it as an idea out of stubbornness and greed.
Maybe I'll get labeled a 'commie' for saying all this, but I think we can create a world where everyone's needs are met and things (information and media are the easiest, imo) are freely available. Thinking we can't do this is a bit of a disservice to the capabilities of humans.
A tax takes a percentage of value that someone else created. A royalty collects payment for access to something you already own. When Alaska collects from oil companies, it's not taxing their profits. It's charging them for extracting a resource that belongs to the people of Alaska. The oil was never theirs.
It being a royalty and not a tax is the reason Alaska's dividend is politically untouchable while tax-funded programs get gutted every budget cycle. Ownership is a fundamentally stronger claim than redistribution.
The same way Alaska taxes oil extraction. Alaska doesn't track which molecule of oil came from which acre. They don't audit every drop. They tax the extraction operation and collect royalties on the resource being pulled out of the ground.
We know who is training large models. We know roughly what data they're using. We know their revenue. A compute tax on large training runs, a revenue royalty on foundation model companies, or a licensing fee above a certain data threshold... none of these require tracking individual data points. They require taxing the extraction operation, which is visible, measurable, and already being monitored to some degree for safety purposes.
We already have a very analogous model in the form of oil and Alaska.
Edit: to clarify, this wouldn't be a tax. A tax is the government taking a cut of someone else's money. A royalty is the owner charging for access to their resource. Alaska doesn't tax Exxon for drilling. It charges Exxon for extracting something that belongs to the people. Same principle here.
Alaska and Norway aren't communist. They're capitalist economies with thriving private sectors. Oil companies still operate, still profit, still compete. The public just gets a share of the value extracted from a collectively owned resource.
The Alaska Permanent Fund has been running since 1982 inside the most conservative state in America. Norway's sovereign wealth fund is the largest on earth and their economy is doing fine.
These models work, and work well. And they exist comfortably within mixed-market economies.
The question is whether the public gets a cut when private companies build fortunes on a collectively generated resource, or whether they don't. We already know the answer can be yes without anything breaking.
Our entire white-collar system might be a house of cards in the face of AI. What I am proposing is a safe hedge against a future of potentially massive wealth inequality and increased unemployment. But this isn't just about protection from injury... people should BENEFIT massively.
ok, fair enough. I think I misread your first comment.
Not sure if that would work in this case, since all these companies scraped (publicly) available data? So with the right resources anyone could redo it?
First, training isn't a one-time event. These companies are continuously scraping new data, training new model generations, ingesting new human output. Every new model is a new extraction event. The fact that GPT-4 already trained on your 2022 blog post doesn't mean the window is closed. GPT-6 will train on your 2025 and 2026 output too. There's always a live point at which to assert a collective claim.
Likely - these models will always be training on us to better understand us and continue to be of value to us commercially.
Second, "anyone could redo it with the right resources" is technically true but practically meaningless. Anyone could theoretically drill for oil too. The barrier was never access to the crude sitting in the ground. It was the billions in infrastructure needed to extract and refine it. Same here. The data is public, but the compute required to turn it into a frontier model costs billions. That concentration of capital is exactly why a public claim on the value makes sense, just like it did with oil.
This framing is hardly fair, since it treats AI as an incinerator of knowledge rather than the democratizer of knowledge that it is.
Every human uses that "resource" to train themselves, and now they use AI to supercharge that consumption.
The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?
I get that the gatekeepers are pissed; LLMs are way cheaper than those expensive gate fees. And I cannot come up with a good-faith argument for how giving the power of SOTA LLMs to anyone for $20/mo is somehow evil or bad.
In an alternate universe these same models are $100k/mo with limited invite only access, occasionally the public gets a single demo prompt with a short reply, and $20/mo access is a utopian wet dream.
If you want UBI, then the framing shouldn't be around "whoever had content on the internet circa 2024 is entitled to lifetime AI company payouts that effectively act as permanent unemployment checks."
It's not democracy if you can't destroy it. It's not democracy if the citizens cannot reject it. It's not democracy if it's being forced down your throat.
Sick of how SV/VC absolutely ruin words for their own monetary benefit.
How about you put it up to a national vote and see what democracy gets you? I highly suspect that vast majorities of the electorate would want to nationalize this tech to benefit everyone rather than benefiting the few.
Democracy means there is a politics of rejection; rejection is normal in functioning democracies. What isn't normal is a small handful of people capturing all collective human intelligence and then claiming only they are allowed to benefit from it.
Democratize means to make something available to everyone.
I suppose the root of the word is from democracy, everyone gets a vote/equal rights, but the meaning doesn't really have anything to do with politics or government...
So, to reframe my argument for clarity:
I have a hard time coming up with an honest critique of why giving everyone incredibly cheap access (often free!) to incredibly powerful LLMs is somehow evil. And obviously these things are ridiculously popular. Average people seem to think they are fucking awesome, and anger seems to be mostly from gatekeepers that are relentlessly screaming that their gates are being bypassed for pennies.
Considering that books have probably been the easiest thing to pirate for the last 30 years, and LLMs are probably the worst way to try and read a book free, I'm not sure why authors would be focusing their anger at AI.
Free books for you as an individual, not free for the library and the city backing it. What's in your library still ends up paying authors (and their publishers).
> Average people seem to think they are fucking awesome
Average people who want to go home from work and game are angry at AI for raising RAM prices.
Average people who want to own their stuff, and not have everything in the cloud, are angry at AI for prices rising fivefold in such a short period of time.
Have you talked to an average person about how they use AI? They use it as a glorified no-code editor (well, not a no-code editor exactly, but the vibe-coding version of one, with no regard for what tech stack is being used, how it's being deployed, literally anything) and as a search engine. Look at how things like Lovable get used.
A search engine that can get things badly wrong, in ways that can literally lead to near-death scenarios, all while keeping a complete "trust me bro" attitude.
Normal people confide secrets in AI and somehow seek therapy from it. And the same AI generates AI psychosis.
Now, coming to the tech industry: it is worried that this level of democratization just means nobody is going to pay for developers, yet at the same time we will see projects created entirely by AI seeking money. It's this weird mush where, if you are a genuine person who just loved computing and loved tinkering, that capability is being offloaded to AI.
I have seen this more and more as agents get more autonomous, or as we let them be. The projects they generate feel hollow to me. I don't consider myself a full-fledged programmer right now, and AI did supercharge me and let me build projects. But nowadays it just feels like prompt -> (time) -> output.
It just feels hollow, and the AI companies did it by abusing the passion of these same developers: scraping Stack Overflow, scraping GitHub, with total disregard for their intellectual property.
People could spend years creating a book about, say, Postgres, and an AI took it, ripped it apart, used that info, and didn't even give credit.
All at the same time that AI is being pushed down on employees. Some just don't want it, but nope, they must. They are forced.
Essentially, engineering with AI feels like it's becoming a marketing gimmick. Anyone who can market somehow (ahem, Openclaw) can get a job at OpenAI, all because in the attention economy hype breeds hype: they had stars, people talked about the stars on Twitter, more people found the repo and starred it, and so on, and started using it.
Turns out that nowadays there are allegations being made against Openclaw:
> Star velocity shocked analysts. Moreover, the repository added roughly 220,000 stars within 84 days of launch. In contrast, Kubernetes needed five years for similar numbers. Many builders call the growth organic. Nevertheless, some observers link the surge to hype, bot accounts, and headline attention, fueling the GitHub Stars Controversy. Independent GitHub Archive pulls show several single-day jumps above 25,000 stars. Such abrupt spikes often signal scripted starring, yet no formal audit confirms abuse. These patterns feed community debate. Consequently, trust in the star metric has weakened, prompting calls for verification.
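For anyone who wants to check star-velocity claims like that themselves, here's a rough Python sketch of the kind of GH Archive pull being described (the repo name and date are placeholders; GH Archive publishes one gzipped JSON-lines file of public GitHub events per hour):

```python
import gzip
import json
import urllib.request
from collections import Counter

def stars_per_day(repo, dates):
    """Count WatchEvents (stars) for a repo in GH Archive's hourly dumps."""
    daily = Counter()
    for date in dates:  # dates like "2025-01-15"
        for hour in range(24):
            url = f"https://data.gharchive.org/{date}-{hour}.json.gz"
            with urllib.request.urlopen(url) as resp:
                lines = gzip.decompress(resp.read()).splitlines()
            for line in lines:
                event = json.loads(line)
                if event["type"] == "WatchEvent" and event["repo"]["name"] == repo:
                    daily[date] += 1
    return daily

# Placeholder repo/date; a single-day spike far above its neighbors is the smell.
print(stars_per_day("someorg/openclaw", ["2025-01-15"]))
```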
The marketing industry has always been closely linked to scam-prone and shady areas of the internet, and engineering used to be mostly clean of all this. Now the norm feels like: buy GitHub stars, buy Twitter attention, or pray to an algorithm you can't read while it reads every move you make. And yes, this is your business strategy now.
Have you looked at truly AI-first companies, at what they do and how they generate their numbers in the first place?
These are two distinct points. I don't think people here would be at all mad if someone made a little prototyping script for themselves with the power of this PhD you mention. Heck, these same programmers you now call gatekeepers never gatekept much of anything. They worked on and contributed to open source for free while being severely underappreciated.
The audacity of calling these same people gatekeepers shocks me, because open source people, if anything, are the opposite of that, and yet AI stole their rights and their licenses from them. An AI can take AGPL code and somehow churn it into MIT, tada! It doesn't even have to give credit when it gets trained on AGPL or ANY type of code, no matter how restrictive or permissive.
These are the same people, by the way, who are on programming forums which, yes, at times had moderation issues, but still tried to help noobs learn for free. They did it because they loved tinkering with computers.
That's my take on it. Feel free to ask for more; I would love to say more, but for the sake of this discussion I think this is enough.
It's absolutely ironic to call open source people, of all people, gatekeepers, when AI violated their rights and licenses.
Calling open source contributors gatekeepers might as well be an oxymoron.
Edit: I was downvoted so quickly after writing this comment that I'm pretty sure someone downvoted it without even reading it.
The topic can be at times too polarizing to even have a discussion.
Oh well. That's completely okay, but to any human who read this: I know my writing can be sporadic, and it was written in much frustration over how people try to frame AI as this harbinger of liberty. I absolutely think that's not the case, and that it's viewing things through very rose-tinted glasses.
So thanks to all the humans who read my comment and were patient haha!
I really appreciate this patience in a world of TLDR and I wish you to have a nice day!
> why do you think all the LLM companies are trying to force these tools through corporate mandates that have been failing
Ironically, if you actually read that study (the MIT report "95% of generative AI pilots at companies are failing"), they found that almost everyone was using AI tools they paid for personally.
> While official enterprise initiatives remain stuck on the wrong side of the GenAI Divide, employees are already crossing it through personal AI tools. This "shadow AI" often delivers better ROI than formal initiatives and reveals what actually works for bridging the divide.
>
> Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.
>
> The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work. In many cases, shadow AI users reported using LLMs multiple times a day, every day of their work week, through personal tools, while their companies' official AI initiatives remained stalled in the pilot phase. [1]
If you want to avoid info bubbles, read the reports, not just headlines and comments.
> If they're so popular and so great, why are they struggling to make profit?
Because they are optimizing for growth not for profit
> Why are they struggling to show large returns
Because they are growing their reach
> Why are they all trying to use the strategy of securing corporate welfare to enrich themselves?
"Securing corporate welfare" well this is one of those reject the premise things. They aren't doing that in any capacity that is different than any other company or sector.
> These things are enabling mass surveillance and human misery, maybe instead of constantly chasing the shiny and letting SV dictate the direction of tech in the US we start introducing public alternatives to this mess?
You're welcome to do that any time; you'll just find that your reality breaks when you realize people actually like LLMs and use them a lot. Go ahead and do some basic research.
> Something tells me that if you gave $100 billion to a consortium of devs across the US they would come up with a better plan to enable technological flourishing rather than mass inequality.
Yawn. You're speaking about the tech bubble but live in a bubble that doesn't match that bubble's reality. Developers love LLMs. Demand is at an all-time high; we have less capacity to deliver LLMs than there is demand for them.
How do you operate in the regular world when you're so unaligned with reality?
Both Apple's and Google's app stores have an LLM as the #1 downloaded app. "BuT thEy ArEn't PoPuLar EvErYbody HaTes Them".
Maybe you should unsubscribe to your bubble subreddits or wherever you are getting information to form such a discordant understanding of reality. I don't think it's working for you.
I mean, raise your hand if you have never paid for AI "slop". I see maybe a hand or two in this room of tens of thousands.
It's a strawman to frame it as AI labs get everything and society gets nothing. Bruh, the fastest-growing applications of all time didn't explode in popularity because they "offer nothing of value". I'm not giving you an argument, I'm giving you a reality check.
The users aren't the ones getting trillion dollar valuations. And for most of them the answer is "they don't have a choice, it's bundled into Microsoft 365 / Google Workspace / Meta / everything" or "they're not, their employer is paying for it".
The answer to "why do businesses pay for stupid things of questionable hard-to-prove value based on hype cycles" would take many books.
> How about you put it up to a national vote and see what democracy gets you? I highly suspect that vast majorities of the electorate would want to nationalize this tech to benefit everyone rather than benefiting the few.
You're probably right -- except for the billions in massive PR campaigns that will be spent to successfully convince enough of them that it's in their best interest to let the companies keep ownership.
This is in addition to the billions in PR already being spent to make AI palatable in spite of the societal and economic costs.
Their billions in PR isn't stopping people from rejecting data centers being built in their communities.
What you have to understand about advocacy is that it's the worst form of politics and it only goes so far. Paid canvassers aren't convincing compared to actual humans organizing with one another.
> vast majorities of the electorate would want to nationalize
Lol, then you've missed how propaganda in the US has worked for the last 100 years. The wealthy have waged a continuous attack against the idea of nationalization/socialization, to the point that it creates an irrational Pavlovian response in huge portions of the population. We, the population, have already lost a war we had no idea we were fighting, to an enemy that plays a far longer game than most of us.
I never said AI companies are evil or that $20/mo access is bad. You're arguing against a position I don't hold.
AI can be genuinely useful AND the people whose collective output made it possible can deserve a share of the wealth it generates. These aren't in conflict.
Alaskans benefit from oil too. It heats their homes, paves their roads, funds their schools. That wasn't an argument against the dividend. "You're already benefiting from the resource" has never been a reason the people who generated it shouldn't share in the profits.
The question was never "is AI good." It's "when something built on collective human output generates trillions, does the public have a claim to a share." Nothing you said here addresses that.
> In an alternate universe these same models are $100k/mo with limited invite only access, occasionally the public gets a single demo prompt with a short reply, and $20/mo access is a utopian wet dream.
So your understanding of the present is that we are living in a utopian wet dream now that we have models that can generate slop so much faster that we have a term for it, "AI slop"?
I, and many other people, don't want this utopian wet dream, so I want to know: did I or other people have a say in it or not?
A select few people decide what the definition of a utopian wet dream is, then take the collective property of everybody else to fulfill it, even putting the employment and livelihoods of those same people at risk.
Sir, does that sound familiar?
> I get the gatekeepers are pissed
No, humans are pissed. Humans just like you and your family are humans too (well, I sure hope so).
A helper tool that I can ask a question and which responds with relevant information gleaned from the vast collection of human-gathered knowledge and experience would be fantastic.
What we have instead is something that often gets things mostly right, if you don't look too hard at it. And the poisoned output of this thing seeps back into the knowledge pool, reducing its accuracy and therefore usefulness.
The problem of LLMs is the dissolution of human knowledge into a sea of slop.
>The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?
The social media companies gave their services for free, and now it turns out they've committed quite a few sins. None of the AI companies are doing this out of the goodness of their hearts, nor will they be satisfied with subscription revenue. If they see opportunities to make more money by manipulating the population, rest assured they will take those opportunities.
I'm currently in the hiring process... and my head is spinning...

1. Interviewing is becoming more difficult. Many skills we valued two years ago are genuinely becoming less valuable.
2. The skills we tested two years ago were in part a proxy for evaluating critical thinking and systems-thinking skills. So we need to re-evaluate our technical interview process.
3. It's genuinely less friction for me to prompt Claude than some of my SEII colleagues. And wtf, Claude is getting so good that it's starting to feel like it's outpacing the intellectual competency of some people. Sure, it does weird things like adding Sleep instead of proper concurrency. SEIIs did that too, and we couldn't as easily reprogram them with a Skill.md.
4. Core competencies remain necessary. Systems thinking. SOLID principles. Communication skills. These skills are more important than ever.
5. Companies that offshored engineers and traded core competency for perceived throughput are doing the same calculus with Claude.
6. Core business models are threatened. There is fear that revenue streams will dry up. How does one hedge against that risk? Humans are expensive.
7. Navigating this situation is hard and uncertain.
I just vibe coded my own NaturalReader replacement. The subscription was $110/year... and I just canceled it.
Chatterbox TTS (from Resemble AI) does the voice generation, WhisperX gives word-level timestamps so you can click any word to jump, and FastAPI ties it all together with SSE streaming so audio starts playing before the whole thing is done generating.
There's a ~5s buffer up front while the first chunk generates, but after that each chunk streams in faster than realtime. So playback rarely stalls.
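The streaming is the only non-obvious part. Here's a stripped-down sketch of its shape (synthesize_chunk is a stand-in for the real Chatterbox call, and the chunking is simplified; only the FastAPI/SSE plumbing is meant literally):

```python
import base64
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

def synthesize_chunk(text: str) -> bytes:
    """Stand-in for the real TTS call (Chatterbox, in my setup)."""
    raise NotImplementedError

def split_into_chunks(text: str, size: int = 400):
    """Naive fixed-size chunking; sentence-aware splitting works better."""
    return [text[i:i + size] for i in range(0, len(text), size)]

@app.get("/tts")
def tts(text: str):
    def event_stream():
        # Ship each chunk as soon as it's synthesized, so the client can
        # start playback after the first chunk instead of waiting for all.
        for i, chunk in enumerate(split_into_chunks(text)):
            audio = synthesize_chunk(chunk)
            payload = {"index": i, "audio": base64.b64encode(audio).decode()}
            yield f"data: {json.dumps(payload)}\n\n"  # SSE event framing
        yield "data: [DONE]\n\n"
    return StreamingResponse(event_stream(), media_type="text/event-stream")
```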
I’m starting to believe that people who think AI-generated code is garbage actually don’t know how to code.
I hit about 10 years of coding experience right before AI hit the scene, which I guess makes me lucky. I know, with high confidence, what I want my code to look like, and I make the AI do it. And it does it damn well and damn fast.
I think I sit at a unique point for leveraging AI best. Too junior and you create "working monsters." Meanwhile, Engineering Managers and Directors treat it like a human, but it's not AGI yet.
While I wholeheartedly agree with your conclusion...
It's worth noting that much of the frustration stems from expectations.
I don't expect an AI to learn and "update its weights"...
I do, however, expect colleagues to learn at a specific rate. A rate that I believe should meet or exceed my company's standards for, uh, human intelligence.
I've seen it on Insta as well, but I think the authors use some very clever processing to hide it from the detection algos, and it quickly gets reported and taken down.
It's been about a decade since many thought full self-driving cars were "just a couple years away".
The reality is that FSD was, and is, a "few decades away".
Same for programming. We can take our hands off the steering wheel for longer stretches of time, this is true, but if you have production apps with real users that spend real money then going to sleep at the wheel is far too risky.
Programmers will become the guardians and sentinels of the codebase, and their programming knowledge and debugging skills will still be necessary when the AI corners itself into thorny situations, or is unable to properly test the product.
The profession is changing, no doubt about it. But its obsolescence is probably decades away.
Self-driving cars are a bad example, because we are talking about a heavily regulated industry, with fatal consequences for malpractice, and a tool (the car) that is not easily available to the average person. I'm pretty sure that if the cost of cars were comparable to what a software engineer pays for Claude Code, governments would relax the laws, and as a society we would accept a few (tens of) thousands of casualties; self-driving cars would already be here.
You talk about programmers becoming guardians, but I see two issues with this: (1) you don't need ten guardians, you need one or two who know your codebase; and (2) a "guardian" is someone who was a junior and turned into a senior; if juniors are no longer needed, in X years there will be no guardians to replace the existing ones.
Yes, it is an extreme example, but if your application(s) makes your company millions of dollars or euros, even if you are in a business that is not heavily regulated [1], mistakes or unavailability can cost a lot of money. Even if your company is not that big, mistakes in a crucial application everyone uses can cost time, money, even expose the company to legal trouble. "Self driving" coding in these situations is not ideal.
[1] Even if your domain is not traditionally considered heavily regulated (military, banking,...) there is a surprising amount of "soft law" and "hard law" in everything from privacy to accounting and much more.
A lot of the software produced in big corps is mission-critical. Self-driving cars are an extreme example but I think the same principle applies to banking, infrastructure, even things like maps, since they are used by billions.
Someone smart recently wrote that the key factor is becoming who is responsible for the functionality, not who wrote the code. Who guarantees correctness and takes responsibility when shtf?
I’d add - especially when the codebases are becoming unknowable because of complexity and speed of code generation.
And the code is generated by entities that do not have a distinction of correctness, or reality.
Essentially emergent genies which we know are blind to the world but very capable of putting together well-sounding sentences.
Obsessive-compulsive personality disorder (OCPD) is actually way more common than people realize, but it barely gets talked about compared to other mental health issues. It may even be a root cause of things like anxiety and depression, and it is often confused with autism.
What's interesting about this research is that it points to a possible biological reason why something like psilocybin might help, i.e., it seems to loosen really rigid brain patterns. That's basically the core issue in OCPD: being stuck in overcontrol and perfectionism. It's not a treatment yet, but it does help explain why psychedelics could be useful for this kind of rigidity.
Would love to see more talk about this - OCPD is often overlooked both by the general public and unfortunately by those impacted by it