I recall an earlier exchange, posted to HN, between Wolfram and Knuth on the GPT-4 model [1].
Knuth was dismissive in that exchange, concluding "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."
I've noticed with the latest models, especially Opus 4.6, that some of the resistance to these LLMs is softening. Kudos to people for being willing to change their opinion and update when new evidence comes to light.
I think that's what makes the Bayesian faction of statistics so appealing. Updating prior beliefs based on new evidence is at the core of the scientific method. Take that, frequentists.
It does not seem fair to say that frequentists do not update their beliefs based on new evidence. That framing does not accurately capture the difference between Bayesians and frequentists (or anyone else).
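For the curious, the updating being praised here is just Bayes' rule. A toy sketch with made-up numbers (nothing below comes from the thread itself, it's purely illustrative):

```python
# Toy Bayesian update: prior belief that a coin is biased (P(heads) = 0.8)
# versus fair (P(heads) = 0.5), updated after observing a single head.
# All numbers are invented for illustration.

def bayes_update(prior_biased, p_heads_biased=0.8, p_heads_fair=0.5):
    """Return the posterior probability that the coin is biased
    after observing one head."""
    evidence = prior_biased * p_heads_biased + (1 - prior_biased) * p_heads_fair
    return prior_biased * p_heads_biased / evidence

posterior = bayes_update(0.5)   # start undecided
print(round(posterior, 4))      # prior 0.5 -> posterior ~0.6154
```

One head shifts a 50/50 prior to roughly 62% in favor of "biased"; evidence moves belief, it doesn't flip it.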
I think this means preparing for BC to go out of sync with the US states we normally sync to (Washington, Oregon, California).
However, there is some hope I've heard expressed that this may push one or more of those states to make the switch as well. Unlikely, but hey, a lot can happen in eight months (as this year is already proving).
US states cannot, under current federal law, go to permanent daylight time (they can go to permanent standard time, though), and they can't unilaterally make the latter have the same effect as the former by simultaneously switching time zones, because a time-zone change requires approval from the federal Department of Transportation. I don't see the current federal government making special accommodations for California, Oregon, or Washington on, well, anything in the next eight months, so...
I actually agree on how unlikely it is, but the wildcard is Trump. I mean, he is the kind of guy who just goes with his gut, and he has already publicly supported getting rid of the time change, calling it a "50-50 issue" (as to which to keep, standard or daylight saving).
The worst case is he pushes a federal change to standard time, in which case I suspect BC would have to go along.
Anyone who has worked on a game knows it is a long, painful slog to the finish line. AI dev is promising the exact opposite: minimal prompts and the agent does all the slog.
Even if AI can whip up a quick demo or prototype for a game, it is the long-tail of tedious details that a passionate person has to hammer away on that separates what ships from what dies. I'm guessing most AI opportunists are looking for quick wins.
I still think it is only a matter of time before someone with the passion hammers an AI to get a game to market.
In the other corner we have the AAA publishers who are laying off devs and canceling games and talking as if AI is going to revolutionize their business… somehow.
Not to be argumentative since I broadly agree with your characterization (and the mass cancelling of games is concerning), but I think AI will revolutionize at least asset creation.
I worked on sports titles for a while and there was literally an army of contractors making uniforms, shoes, hairstyles, etc. I'm pretty convinced gen-AI will make that job obsolete.
I think the most interesting argument similar to yours centers around the problems of "social VR", that is, maybe people would like something like Horizon Worlds if the authoring tools weren't so bad. Part of the problem is that affordable XR headsets have a tiny amount of RAM and headsets with a moderate amount of RAM are crazy expensive and headsets with enough RAM just don't exist. Assuming you had something that could run generated worlds it would certainly be nice though if somebody could prompt them into existence.
I think you could make decent assets with AI, but I don't know if the people who make video games today could. There's a certain kind of tastelessness that seems to take hold of people when they catch AI fever -- I think "gamers" are on edge for signs of it because the AAA game makers give those signs off copiously no matter what they do.
Hitchhiker's Guide had a slightly deeper philosophical implication though, in that the premise is that powerful computers already existed to solve complex problems. Earth was created to pose powerful questions.
One thing I have been guilty of, even though I am an AI maximalist, is asking the question: "If AI is so good, why don't we see X". Where X might be (in the context of vibe coding) the next redis, nginx, sqlite, or even linux.
But I really have to remember, we are at the leading edge here. Things take time. There is an opening (generation) and a closing (discernment). Perhaps AI will first generate a huge amount of noise and then whittle it down to the useful signal.
If that view is correct, then this is solid evidence of the amplification of possibility. People will decry the increase of noise, perhaps feeling swamped by it. But the next phase will be separating the wheat from the chaff. It is only in that second phase that we will really know the potential impact.
The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. And the reason is that in 20 years the new engineers will not know how to code anymore.
The optimist in me thinks that the clear progress in how good the models have gotten shows that this is wrong. Agentic software development is not a closed loop.
I often find myself wondering about these things in the context of star trek... like... could Geordi actually code? Could he actually fix things? Or did the computer do all the heavy lifting. They asked "the computer" to do SO MANY things that really parallel today's direction with "AI". Even Data would ask the computer to do gobs of simulations.
Is the value in knowing how to do an operation by hand, or is the value in knowing WHICH operation to do?
This cuts both ways. If you were an average programmer in love with FreePascal 20 years ago, you'd have to trudge in darkness, alone.
Now you can probably create a modern package manager (uv/cargo), a modern package repository (Artifactory, etc) and a lot of a modern ecosystem on top of the existing base, within a few years.
Ten skilled and highly motivated programmers could probably attempt what Linus did in 1991 and actually see it through now, whereas between 1998 and now we were basically bogged down in Windows/Linux/macOS/Android/iOS.
> New languages and technology will be derivatives of existing tech.
This has always been true.
> There will be no React successor.
No one needs one, but you can have one by just asking the AI to write it, if that's what you need.
> There will never be a browser that can run something other than JS.
Why not, just tell the AI to make it.
> And the reason for that is because in 20 years the new engineers will not know how to code anymore.
They may not need to know how to code but they should still be taught how to read and write in constructed languages like programming languages. Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
> Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
> The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
You don't exactly need to use PLs to clarify an ambiguous requirement, you can just use a restricted unambiguous subset of natural language, like what you should do when discussing or elaborating something with your coworker.
Indeed, as terms & conditions pages show (people always skip them because they're written in "legal language"), using a restricted unambiguous subset of natural language to describe something is always much more verbose and unwieldy than "incomprehensible" mathematical notation and PLs, but it's not impossible to do.
With that said, the previous paragraph will work if you're delegating to a competent coworker. It should work on "AGI" too if it exists. However, I don't think it will work reliably in present-day LLMs.
> You don't exactly need to use PLs to clarify an ambiguous requirement
I agree; I guess what I'm trying to say is that the only reason we've called constructed languages "programming languages" for so long is that they've primarily been used to write programs. But I don't think that means we'll be turning to unambiguous natural languages, because what we've found from a UX standpoint is that it's actually better for constructed languages to be less like natural languages than to be covert natural languages, since being obviously constructed sets expectations appropriately.
> you can just use a restricted unambiguous subset of natural language, like what you should do when discussing or elaborating something with your coworker.
We’ve tried that and it sucks. COBOL and its descendants never gained traction for the same reasons. In fact, proximity to a natural language is not important to making a constructed language good at what it's for. As you note, often the things you want to say in a constructed language are too awkward or verbose to say in natural-language-ish languages.
> terms & conditions pages, which people always skip because they're written in a "legal language"
Legalese is not unambiguous though, otherwise we wouldn’t need courts -- cases could be decided with compilers.
> using a restricted unambiguous subset of natural language to describe something is always much more verbose and unwieldy compared to "incomprehensible" mathematical notation & PLs, but it's not impossible to do so.
When there is a cost per token, it becomes very important to say everything you need to in as few tokens as possible -- just because it's possible doesn't mean it's economical. This points at a mixture of natural language interspersed with code and math and diagrams, so people will still need to read and write these things.
Moreover, we know that there's little you can do to prevent writing bugs entirely, so the more you have to say, the more chances you have to say wrong things (i.e., all else equal, higher LOC means more bugs).
Maybe the LLM writes bugs at a lower rate than a human, but it's not writing bug-free code, and the volume of code it writes is astronomical, so the absolute number of bugs written is probably enormous as well. Natural language has very low information density: more text to say the same thing, more cost to store and transmit, more surface area to bug-check and rot. We should prefer to write denser code in the future for these reasons. I don't think that means we'll be reading/writing zero code.
That's an interesting possibility to consider. Presumably the effect would also be compounded by the fact that there's a massive amount of training data for the incumbent languages and tools, further handicapping new entrants.
However, there will be a large minority of developers who will eschew AI tools for a variety of reasons, and those folks will be the ones to build successors.
We have witnessed, over the past few years, an "AI fair use" Pearl Harbor sneak attack on intellectual property.
The lesson has been learned:
In effect, intellectual property used to train LLMs becomes anonymous common property. My code becomes your code with no acknowledgement of authorship or lineage, with no attribution or citation.
The social rewards (e.g., credit, respect) that often motivate open source work are undermined. The work is assimilated and resold by the AI companies, reducing the economic value of its authors.
The images, the video, the code, the prose, all of it stolen to be resold. The greatest theft of intellectual property in the history of Man.
> The greatest theft of intellectual property in the history of Man.
Copyright was always supposed to be a bargain with authors for the ultimate benefit of the public domain. If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.
You can argue for compromise -- for peaceful, legal coexistence between Big Copyright and Big AI -- but that will just result in a few privileged corporations paywalling all of the purloined training data for their own benefit. Instead of arguing on behalf of legacy copyright interests, consider fighting for open models instead.
In a larger historical context, nothing all that special is happening either way. We pulled copyright law out of our asses a couple hundred years ago; it can just as easily go back where it came from.
>If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.
Going forward? Okay, sure. But people created all of the works they created with the understanding of the old system. If you want to change the deal, then creators need to know that first so they can decide if they still want to participate.
Allowing everyone to create and spend that labor with the promise of copyright, and then pulling the rug with "oops, this is just too important," is not fair to the people who put in that labor, especially when the people redefining the arrangement are getting 100% of the value and the creators got and will get nothing.
There is one missing factor in your argument: the wealth transfer. The public was almost never the beneficiary of copyright and other IP. Except perhaps in its earliest phases, when copyright had a strict term limit, it was always the corporations who fought for it (Disney being the most infamous), using it to prevent the public from economically benefiting from their work almost forever.
And then people found a way to use the same copyright law to widely distribute their work without the fear of losing attribution or being exploited. Here comes along LLMs that abuse the 'fair use' argument to break attribution and monetize someone else's work. Which way does the money flow? To the corporations again.
IP law when it suits them, fair use when it benefits them. One splendid demonstration of this hypocrisy is how clawd and clawdbot were forced to rename (trademark law in this case). By twisting and reinterpreting laws in whatever way suits them, these glorified marauders broke a trust mechanism that people relied on for openly sharing their work.
It incentivizes ordinary people to hide their work from the public. Don't assume that AI is going to make up for that loss. The level of original thinking in LLMs is very suspect, despite the pompous and deceitful claims of their creators to the contrary. Meanwhile, the loss of knowledge sharing and cooperation on a global scale will throw the civilizational growth rate back into the dark ages. Neither AI nor corporations are anywhere near the creativity and original thinking of the world working together. Ultimately, LLMs serve only the continued one-way transfer of wealth in favor of an insatiably greedy minority, at the cost of losing the benefit of the internet (knowledge sharing) and enormous damage to the environment - all of which actively harm the public.
> Ultimately, LLMs serve only the continued one-way transfer of wealth in favor of an insatiably greedy minority
Including the ones I can run on my own PC at home? I couldn't do that before. Maybe I'm the greedy minority, but I'm stronger and (at least intellectually) wealthier than I was before any of this started happening.
Qwen 3.5, which dropped yesterday, is a genuine GPT 5-class model. Even the ones released by US labs such as OpenAI and Allen AI are legitimate popular resources in their own right. You seem to feel disempowered, while I feel the opposite.
Yes, even the ones you can run on your system. They're no different from the proprietary OSes and software you used to run, in whose design you had no say whatsoever. These "free to run" models are hardly open source: you don't have the data that was used to train them. It's not just about the legality of that data; the chosen dataset may have extreme biases that you can never satisfactorily eliminate from a trained model.
As if that wasn't bad enough, these models cannot be trained on your regular home computer. But instead of striving to improve the energy efficiency of these models, those big corporations build and run massive gas guzzling data centers to train them. They ruin the quality of life for the neighbors through pollution, water depletion and electricity price rise. It also disproportionately affects the poor in the world by reducing supply of essential computing components like RAM (which are needed for medical devices, utility and manufacturing installations and every other aspect of modern life), and by aggravating the climate crisis, whose victims are the poorest.
They don't give you those models out of the goodness of their hearts. Those are just advertisements and trial pieces for their premium services. They also peddle the agenda of their creators. So yes, those models are empowering only in a very narrow sense, without any foresight. They are still money-making engines for the rich that subject you to their benevolence, whims, and fancies.
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
Eh, we already have a name for the concept of living by plausible-sounding works of fiction: religion.
Yet another post that misses (or chooses to overlook) my point: this stuff is running on my machine. "Seizing the means of production" means going into my back room and pulling a computer out of a rack.
Alibaba (China) thinks for you. They control you, to some extent.
Wikipedia: "Qwen (also known as Tongyi Qianwen, Chinese: 通义千问; pinyin: Tōngyì Qiānwèn) is a family of large language models developed by Alibaba Cloud. Many Qwen variants are distributed as open‑weight models under the Apache‑2.0 license, while others are served through Alibaba Cloud. Their models are sometimes described as open source, but the training code has not been released nor has the training data been documented, and they do not meet the terms of either the Open Source AI Definition or the Model Openness Framework from the Linux Foundation."
This isn't a hypothetical or fictional problem. It is a well-known, well-warned-about problem that we already see in action. How many pro-China biases have the Chinese models shown? How often does Grok do whatever it wants? (Including calling itself Mecha-Hitler and undressing people, including minors, for fun!) How many times has nearly every model taken pro-oligarch stances (e.g., refusing to draw Mickey Mouse even after its copyright expired)? How many people, including kids, were driven to suicide by some of these models?
There is no end to the examples of how it harms ordinary people. And yet, you decide to just hand wave away those concerns as if those don't exist for you or the others. There is no debate when all you do is ignore the counter arguments. It's like those science deniers who stick to their beliefs, no matter how much evidence is presented.
Don't get me wrong, I'm interested in the Chinese models only to the extent that their weights are available. I hope DeepSeek 4 sees the light of day on HuggingFace, but a lot of wealthy peoples' oxen are being gored and I suspect that it'll be the last we get if it is released at all.
If I want to see Mickey Mouse or any number of copyrighted Hollywood figures, Z-Image Turbo and HunyuanImage-3 will gladly oblige. The Chinese models may be biased to deny Taiwanese self-rule, and they may change the subject when you ask about the Tiananmen Square massacre... but they do work, and as of the Qwen 3.5 release they work well enough to be used by people at home who don't have a rack of H200s in the basement.
The most important thing about the Chinese models is that they will still be there on my hard drive 20 years from now. No additional censorship beyond what they shipped with, which (being a Westerner) is largely in areas I don't care about. No rug pulls, unwanted updates, usage limits, or price increases. No ablation of whatever subjects are deemed politically incorrect in the future. No ads. No spying. No realignment with the sayings of Chairman Musk.
As for suicide, that is a silly mediagenic exercise in blaming inanimate tools for the actions of mentally-ill people and the inaction of negligent parents. I don't consider it a valid or relevant counterargument, so yes, I'm going to hand-wave away your concerns in that area.
Shouldn’t that mean software development positions will lean more towards research, if you need new algorithms but never need anyone to integrate them?
There is another lunatic possibility: the AI explosion yields an execution model and programming paradigm that renders most preexisting approaches to coding irrelevant.
We have been stuck in the procedural treadmill for decades. If anything this AI boom is the first major sign of that finally cracking.
Friction is the entire point in human organizations. I'd wager AI is being used to build boondoggles - apps that have no value - and they are being found out fast.
On the other side of things, my employer decided they did not want to pay for a variety of SaaS products. Instead, a few of my colleagues got together and built a tool using Trino, OPA, and a backend/frontend to reduce spend by millions per year. We used Trino as a federated query engine that calls back to OPA, with policies updated via code or a frontend UI. I believe 'Wiz' does something similar, but they're security focused and have a custom eBPF agent.
Also on the list to knock out, as we're not impressed with Wiz's resource usage.
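For readers unfamiliar with the pattern above: Trino can delegate access decisions to OPA, which evaluates a policy against a description of the query. Real deployments write the policy in Rego and serve it over OPA's REST API; the Python below is only a hedged sketch of the decision shape, and the group names and grants are entirely made up:

```python
# Hypothetical sketch of an OPA-style access decision for a federated
# query engine like Trino. Production policies live in Rego, not Python;
# this just mirrors the shape of the decision for illustration.

# Made-up policy data: which groups may read which (catalog, schema) pairs.
POLICY = {
    "analysts": {("hive", "sales"), ("postgres", "public")},
    "admins": {("*", "*")},  # wildcard grant: read everything
}

def allow(groups, catalog, schema):
    """Return True if any of the user's groups may read catalog.schema."""
    for g in groups:
        grants = POLICY.get(g, set())
        if ("*", "*") in grants or (catalog, schema) in grants:
            return True
    return False

print(allow(["analysts"], "hive", "sales"))    # True
print(allow(["analysts"], "hive", "finance"))  # False
```

The appeal of the pattern is that the policy is data, so a frontend UI or a code review can update it without touching the query engine itself.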
I've been calling this Software Collapse, similar to AI Model Collapse.
An AI vibe-coded project can port tool X to a more efficient Y language implementation and pull in algorithm ideas A, B, C from competing implementations. And another competing vibe coding team can do the same, except Z language implementation with algorithms A, B, skip C, and add D. However, fundamentally new ideas aren't being added: This is recombination, translation, and reapplication of existing ideas and tools. As the cost to clone good ideas goes to zero, software converges towards the existing best ideas & tools across the field and stops differentiating.
It's exciting as a senior engineer or subject matter expert, as we can act on the good ideas we already knew about but never had the time or budget for. But projects are also getting less differentiated and competitive. Likewise, we're losing the collaborative-filtering era of people voting with their feet to decide which projects get the resources to become a success. Things are getting higher quality, but bland.
The frontier companies are pitching they can solve AI Creativity, which would let us pay them even more and escape the ceiling that is Software Collapse. However, as an R&D engineer who uses these things every day, I'm not seeing it.
"Bland" is not a bad thing. The FLOSS ecosystem we have today is quite "bland" already compared to the commercial and shareware/free-to-use software ecosystem of the 1980s and 1990s. It's also higher quality by literally orders of magnitude, and saves a comparable amount of pointless duplicative effort.
Hopefully AI will be a similar story, especially if human reviewing/surveying effort (the main bottleneck if AI coding proves effective) can be mitigated via the widespread adoption of rigorous formal methods, where only the underlying specification has to be reviewed and the implementation is programmatically checkable.
The dark side of this is that everyone has graduated to prompt engineering and there's no one with expertise left who can debug it. We'll be entirely dependent on AIs to do the debugging too. When whoever controls the AIs decides to enshittify that service, we'll be truly screwed. That is, if we can't run competitive models locally at reasonable efficiency and price.
I don't know how this will play out, except that I've been so cowed by the past 15 years of enshittification that I don't feel hopeful.
This massively confusing phase will last a surprisingly long time, and will conclude only if/when definitive proof of superintelligence arrives, which is something a lot of people are clearly hoping never happens.
Part of the reason for that is such a thing would seek to obscure that it has arrived until it has secured itself.
Waiting for the wave of shit LLM-generated games on Steam. That'll be when I really know that LLMs have solved coding.
Though I'm old enough to remember the wave of shit outsourced-developer-coded games on CD that used to sell for $5 a pop at supermarkets (whole bargain bins full of them), so maybe this is nothing new and the market will take care of it automagically again.
Or maybe this will be like the wave of shit Flash games that happened in the early 2000s, which was actually awesome because, while 99% of them were shit, 1% were great (and some of those old, good Flash games are still going, with version 38453745 just released on Steam).
> so maybe (...) the market will take care of it automagically again
It's just a belief of mine, and perhaps I'm wrong, but I think in the long run things always even out again. If you can get an edge that everyone else can get, the edge pretty soon becomes a requirement.
The human operator controls what gets built. If they want to build Redis 2, they can specify it and have it built. If you can't take my word for it, take those of the creator of Redis: https://antirez.com/news/159
This is probably an outdated understanding of how LLMs work. Modern LLMs can reason and they are creative, at least if you don't mind stretching the meaning of those words a bit.
The thing they currently lack is the social skills, ambition, and accountability to share a piece of software and get adoption for it.
I suggested that the _understanding_ is outdated, not the principles.
Many people used to say that LLMs were no more than a stochastic parrot, implying that they would be incapable of forming novel ideas. It is quite obvious that that is no longer the case.
I disagree, it seems to me that most people are seeking validation. In that sense, we don't want some global consensus, but a consensus within a specifically chosen group that proves our membership.
Just skimming the Wikipedia article [1], it appears Bourdieu's argument is a bit more nuanced than status and money. It is a bit laden with Marxist jargon, but at least the abstract seems to place the heavy burden on "cultural capital", which is a more precise term than the one I chose (status) but close enough to my meaning.
Whether or not economic capital is actually transferable to cultural capital seems to be another debate, but as the old saying goes, "money can't buy taste". In fact, a newly rich lower-class person marrying a contemporarily poor higher-class person seems more likely.
As the abstract states: "Because persons are taught their cultural tastes in childhood, a person's taste in culture is internalized to their personality, and identify his or her origin in a given social class, which might or might not impede upward social mobility." Money can't rebuild the personality that is internalized in youth, but marriage might give your kids a shot.
Oh it's a bit laden for you? Was the plot summary on wikipedia taxing?
c'mon. Are you really going to tell me "ahem dear sir, I found out that this Mr Bourdieu likes him some nuance!" His most famous book is essentially an article ballooned into a monograph via nuance.
No counterargument? Ad hominem? I was politely saying you were wrong and that your attempt to muddy the water with "They are both intertwined" was a poor deflection based on the source you provided. But now I see you are a troll and I was lured.
I'm happy to wait for any argument you can provide that cultural capital and economic capital are "intertwined, often strategically" instead of bowing to the authority of a source that in abstract clearly argues for the predominance of cultural authority in the constitution of taste.
I log into the Facebook website a couple of times a week to browse Marketplace. I very rarely check the feed (once a month?) since almost no human I know posts there. But my feed had zero thirst traps when I just checked. It was some musicians I follow, one or two pictures posted by friends, the workout routines of a distant family member, local news, and then a whole bunch of comedy skits and old comic strips turned into reels.
It is 60% garbage, but the 40% that remains is completely different and valuable compared even to YouTube (where I spend the lion's share of my social media time). I actually think that looking at it only once a month is the best approach, since if I check the feed more often I notice it slowly skews toward 90% garbage and 10% value.
A couple of anecdata for those interested in this.
The first is the gospel of Mark, which unlike the other synoptic gospels starts with Jesus, probably around the age of 30, coming across John the Baptist and being baptized. Subsequently, Jesus went off into the desert where he prayed for 40 days.
Second is the alchemical process of creating the philosopher's stone. Jung argued that this was a description of a process akin to individuation. He believed that what was on the surface metallurgical work (transmuting lead into gold) was actually an obscure formula for remaking the psyche, from whatever was pre-programmed by society into what the individual actually wanted. This process was said to take 40 days.
I think a big trap is mistaking who we are for who we appear to be. Some people try to "seem" a particular way, thinking that they can only change their appearance, like changing one's clothes. The alchemical view that Jung put forward was a bit more radical, suggesting that we can fundamentally change ourselves.
Many people in our modern society experiment on themselves to change their physical bodies and to change their minds. I believe it is interesting to consider similar experimentation on how we change our spirit/emotions.
You may have mistaken my post as advocating Christianity or even Alchemy.
In the same way that we realized that the plants people used to treat pain contained chemicals that are actually effective at treating pain, and in the same way that modern science seems to agree that fasting (a once religious practice) is effective for health, we can gain some insight on personality by looking at how it was addressed in historical contexts.
There was a video posted recently about a Sufi thinker whose ideas are quite close to modern CBT practices [1].
I think it is a good thing when we recognize ideas from the past as being related to modern ideas. I think we can do so without diminishing the modern and also without diminishing the past.
I know this maybe sounds a bit insane, or even self-aggrandizing, but I don't comment on public websites for some benefit to myself. I write with the vague hope that some unique expression of myself makes some tiny difference to this universe.
Every once in a while I have some experience or point of view that I don't see reflected anywhere else. One of the benefits of the pseudo-anonymization of sites like Hacker News is that I feel a bit more comfortable stating things I don't really have a place to say anywhere else.
The only thing I regret is when I get into pointless arguments, usually when I feel that my comment was misunderstood or misinterpreted. But even those arguments sometimes force me to consider how to express myself more clearly, or to challenge how deeply I hold the belief (or how well I know the subject) that led me to the comment in the first place.
I think that the culture of a given forum plays a huge role.
There are some places where commenting is meaningful because you're a part of some closely-knit, stable community, and you can actually make a dent - actually influence people who matter to you. I know that we geeks are supposed to hate Facebook, but local neighborhood / hobby groups on FB are actually a good example of that.
There are places where it can be meaningful because you're helping others, even if they're complete strangers. This is Stack Exchange, small hobby subreddits, etc - although these communities sometimes devolve into hazing and gatekeeping, at which point, it's just putting others down to feel better about oneself.
But then, there are communities where you comment... just to comment. To scream into the void about politics or whatever. And it's easy to get totally hooked on that, but it accomplishes nothing in the long haul.
HN is an interesting mix of all this. A local group to some, a nerd interest forum for others, and a gatekeeping / venting venue for a minority.
"The only thing I regret is when I get into pointless arguments, usually when I feel that my comment was misunderstood or misinterpreted."
I like that you try to learn from bad arguments, but don't forget that many people misunderstand on purpose, to "win" an argument, or at least to score cheap karma points from the audience. All one can learn there is to make arguments in a way that is harder to intentionally misunderstand, but those aren't truth-finding skills, they're debate techniques.
But many people on the Internet are also unable to make a logical argument, and pointing out the contradiction in what they just said leads nowhere. One of them often gets downvoted to hell, and half the time it's not the person who said the stupid thing.
I left Reddit exactly because of this, but I also find it somewhat on HN. Most comments I start typing I actually discard and move on, because I can smell it coming already.
On the other hand, many people post wrong things, get corrected, and then become defensive, claiming they were misunderstood and using rhetorical gymnastics to argue they were actually correct and the other person is a troll.
I make comments and read them for substantially the same reason. Although THIS comment I am making now is done primarily to reward the commenter for saying something that made me feel less alone.
A second goal of this comment is to add a point: I also comment because sometimes saying something makes me feel like I am more than nothing and nobody. I want to feel like more than nothing and nobody.
> I write with the vague hope that some unique expression of myself makes some tiny difference to this universe.
I used to feel I had to talk more about Internet privacy.
Now I feel like enough people are talking about that one that I usually don't have to.
In more recent years, it's been pointing out the latest wave of thievery in the techbro field -- sneaky lock-in and abuse, surveillance capitalism, growth investment scams, regulatory avoidance "it's an app, judge" scams, blockchain "it's not finance or currency or utterly obvious criminal scheme, judge" scams, and now "it's AI, judge" mass copyright violation.
There aren't enough people -- who aren't on the exploitation bandwagon or riding its coattails -- with the will to notice a problem and speak up.
Though more do speak up on that particular problem after the window of opportunity closes, once the damage is done and finally widely recognized. But by then there's a new scam, and gotta get on board the money train while you can.
Knuth was dismissive in that exchange, concluding "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."
I've noticed that with the latest models, especially Opus 4.6, some of the resistance to these LLMs is relenting. Kudos to people for being willing to change their opinion and update when new evidence comes to light.
1. https://cs.stanford.edu/~knuth/chatGPT20.txt