
I think I will try to push at least my more techy friends to a combination of Matrix and Teamspeak (because honestly the Matrix implementation of anything VC/Screenshare/Video is pure ass. A group call on Element right now starts a Jitsi conference. Can we be for real). On CachyOS with Wayland I additionally apparently need OBS with WebRTC for streaming because audio streaming support for Wayland seems to be some sort of circle of hell.

Matrix is kinda jank but I hope Discord enshittification will speed up client development a bit. I am just really fond of the concept of federated servers and self-owned chat history. Prevents hostage holding of chats in the future. For people who don't want to switch I will run a Discord Bridge for now but I do hope to get my main contacts off this software honestly.

For me anything that visibly looks like Discord is a non-starter because I want a product with an actual vision, not someone trying to slopcode an exact replacement of the Discord UI. Imagine if Discord just looked like Skype did in 2008. Yuck. The Matrix protocol, for all its faults, at least has some form of vision.


You might be using old clients that are recommended against, as (AIUI) Jitsi stopped being the default with Matrix 2.0 (released ~late 2024 [0]).

Is it totally fair to blame users? Not entirely, as some features are still being pushed into ElementX. But it's a known problem, with a known solution (finish ElementX and/or wait for other clients to catch up), and a weakness of an open ecosystem.

Moxie wasn't wrong when he said that open ecosystems have to move slower, but I believe it's worth it in the long-run.

[0] https://matrix.org/blog/2024/10/29/matrix-2.0-is-here/#3-nat...


> If writing the code is the easy part, why would I want someone else to write it?

Exactly my takeaway to current AI developments as well. I am also confused by corporate or management who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will do any better in composing these tools than the developers who've been working with this technology for years? Their job security is hedged precisely IN THE FACT that we are limited by time and need managed teams of humans to create larger projects. If this limitation falls, I feel like their jobs would be the first on the chopping block, long before me as a developer. Competition from tech-savvy individuals would be massive overnight. Very weird horse to bet on unless you are part of a frontier AI company who do actually control the resources.


I don't think this will be an issue, given history. COBOL was developed so that someone higher up could use more human language to write software. (BASIC too? I don't know, I wasn't around for either).

More recently, low/no-code platforms are advertised the same way... and yet, they don't replace software developers. (In fact, some of my employer's projects involve migrating away from low/no-code platforms in favor of code, because performance and other non-functionals are hidden away. We had a major outage as a result when traffic increased.)


Ultimately, this would lead to a situation where only the customer-facing (if there are any) or "business-facing" (i.e. C-suite) roles remain. I'm not sure I like that.

Do you think any of them cares about long term? Regardless of AI, your head is always on a chopping block. You always grab that promo in front of you, even if it means you’ll be axed in two years by your own decisions.

I mean I understand that you want your business to not fall behind right now, sure. But I don't understand people in management who are audibly _excited_ about the prospect of these developments even behind closed doors. I guess some of them imagine they are the next Steve Jobs only held back by their dev teams, but most of them are in for a rude awakening lol. And I guess a lot are just grifting. The amount of psychotic B2B SaaS rambling on Twitter is already unbearable as is.

How does the saying go?

>Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.


Offtopic but, the funny part about that statement is that the only place socialists ever got the poor's support was China during its civil war, and then only because Mao was directly bribing them with land in exchange for their support. Everywhere else the main population of socialists was government bureaucrats. Ironically, all authoritarian ideologies (fascism too) happen this way, with low-level government officials being the most zealous core supporters. The whole "revolution of the people" narrative is just propaganda, like your quote.

"America Added 1000 Millionaires A Day In 2025, 40% Of World’s Millionaires Now In U.S." from https://www.forbes.com/sites/dougmelville/2026/01/05/america...

"Altogether, according to an estimate by UBS Wealth Management, the United States is home to ~22m millionaire households — roughly one of every six households." from https://thehustle.co/originals/the-insane-growth-of-americas...

Looks like the Americans have the right idea and Steinbeck ultimately didn't.


I mean technically I'm a millionaire because I own a house. That said, scaling from the time the quote was written to now, I'd need something like $20 million+.

I feel fairly confident in asserting without bothering to check that there are proportionally more $20-millionaires in the United States than there were in Steinbeck's time as well so Steinbeck is still on the wrong side of history. While capitalism certainly needs to be regulated to curb its worst excesses, far more than it is now, it's abundantly clear that it has outperformed the alternatives.

Yeah I mean idk, my takeaway from OpenClaw was pretty much the same - why use someone's insane vibecoded 400k LoC CLI wrapper with 50k lines of "docs" (AI slop; and another 50k Chinese translation of the same AI slop) when I can just Claude Code myself a custom wrapper in 30 mins that has exactly what I need and won't take 4 seconds to respond to a CLI call.

But my reaction to this project is again: Why would I use this instead of "vibecoding" it myself. It won't have exactly what I need, and the cost to create my own version is measured in minutes.

I suspect many people will slowly come to understand this intrinsic nature of "vibecoded software" soon - the only valuable one is one you've made yourself, to solve your own problems. They are not products and never will be.


"Open source" is no longer about "Hey I built this tool and everyone should use it". It's about "Hey I did this thing and it works for me, here's the lessons I learned along the way", at which point anyone can pull in what they need, discard what they don't, and build out their own bespoke tool sets for whatever job they're trying to accomplish.

No one is trying to get you to use openclaw or nanobot, but now that they exist in the world, our agents can use the knowledge to build better tooling for us as individuals. If the projects get a lot of stars, they become part of the global training set that every coding agent is trained against, and the utility of the tooling continues to increase.

I've been running two openclaw agents, and they both made their own branches and modified their memory tooling to accommodate their respective tasks, etc. They regularly check for upstream things that might be interesting to pull in, especially security-related stuff.

It feels like pretty soon, no one is going to just have a bunch of apps on their phone written by other people. They're going to have a small set of apps custom built for exactly the things they're trying to do day to day.


> Open source" is no longer about "Hey I built this tool and everyone should use it".

Was open source ever about that? I thought it was "Hey I built this tool and I'm putting it on internet if anyone wants to use it" often accompanied by a license saying "no warranties".

> It feels like pretty soon, no one is going to just have a bunch of apps on their phone written by other people. They're going to have a small set of apps custom built for exactly the things they're trying to do day to day

I think today's AI tools like agents are for people who are programmers but don't want to program, not for people who aren't programmers and don't want to program. As in, "no one is going to..." is a very broad statement to make about an average person who just uses apps on their phone. Your average person will not start vibe coding their own apps just because they can (because they couldn't care less).


"If the projects get a lot of stars, they become part of the global training set that every coding agent is trained against, and the utility of the tooling continues to increase."

OpenClaw currently has 1.8k issues, 400k lines of code, had an RCE exploit discovered just a few days ago, it takes 5 seconds to get a response when I type "openclaw" in my CLI, and most of the top skills are malware. I'm pretty sure training on that repository is the equivalent of eating a cyanide pill for a coding model.

I actually agree with your take that custom apps will take over a subset of established software for some users at some point, but I don't think models poisoning themselves with recklessly vibecoded bloatware is how we get there at all.


Are you me?? I'm literally building highly personalized and/or idiosyncratic software with claude to solve personal and professional problems.

Thanks to tauri, I've now made two desktop apps and one mobile app for the first time in the last two months.

None of this was nearly as feasible just a year ago


It is not just about making it yourself; it's a tradeoff between how much you can control the software and how much of the real world it has seen. Folding in requirements learned from other people's mistakes is slower in self-controlled development than in an open collaboration or a company-managed project. This is why vibe-coded projects (built from your initial requirements) feel good to start but are tough to evolve (with real learnings).

Vibe-coded projects are high-velocity but low-entropy. They start fast, but without the "real-world learnings" baked into collaborative projects, they often plateau as soon as the problem complexity exceeds the creator's immediate focus.


So, as an OpenClaw disliker, the agent harness at the core of it (pi) is really good, it's super minimal and well designed. It's designed to be composed using custom functionality, it's easy to hack, whereas Claude Code is bloated and totally opinionated.

The thing people are losing their shit over with OpenClaw is the autonomy. That's the common thread between it, Ralph and Gastown that is hype-inducing. It's got a lot of problems but there's a nugget of value there (just like Steve Yegge's stuff)


The core "design" is not bad, but the "code" quality is... mid.

They basically keep breaking a different feature on every release.


What I read is the unlimited token count. You get the most out of this when you have it run in an autonomous loop where your interaction is much more minimal. But pinging the thing every minute in a loop is going to burn through your token limit, so running the LLM locally is the way to get infinite tokens.

The problem is local models aren't as good as the ones in the cloud. I think the success stories are people who spent like 2-4k on a beefy system to run OpenClaw or these chatbots locally.

The commands they run are, I assume, detailed versions of prompts that are essentially: "build my website," "invest in stocks." And then they watch it run for days.

When using claude code it's essentially a partnership. You need to constantly manage it and curate it for safety but also so the token count doesn't go overboard. With a fully autonomous agent and unlimited token count you can assign it to tasks where this doesn't matter as much. Did the agent screw up and write bad code? The point is you can have the system prompt engage in self correction.
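Run locally, the autonomous loop described above is on the order of this sketch (the `run_agent_loop` function, the model callable, and the "DONE" convention are all made up for illustration; a real setup would call a local endpoint such as one served by llama.cpp or Ollama):

```python
import time  # a real loop would sleep between polls; the demo uses 0

def run_agent_loop(model, task, max_steps=5, poll_seconds=0):
    """Drive a model autonomously: feed it the task plus its own prior
    output so it can self-correct, and stop when it declares itself done.
    With a locally hosted model there is no per-token cost, so max_steps
    can be raised arbitrarily."""
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = model("\n".join(history))
        history.append(reply)
        if reply.startswith("DONE"):
            break
        time.sleep(poll_seconds)
    return history

# Stub standing in for a local LLM endpoint; a real integration would
# make an HTTP call here instead of pattern-matching the prompt.
def fake_model(prompt):
    return "DONE: task finished" if "fix the bug" in prompt else "retrying"

log = run_agent_loop(fake_model, "fix the bug")
```

The point of the shape is that the "partnership" moves from per-message curation into the loop's stopping and self-correction rules.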


I mean, in not vibecoding it yourself you are already saving tokens... Personally, I see no benefit in having an instance of something like this... so, I wouldn't spend tokens, and I wouldn't spend server-time, or any other resource into it, but a lot of people seem to have found a really nice alternative to actually having to use their brains during the day.


> a lot of people seem to have found a really nice alternative to actually having to use their brains during the day.

Or have they found a way to use their brains on what they deem more useful, and less on what is rote?


I see this retort pasted everywhere. What exactly are you referring to? I think it's fair to assume any competent person never spends their brainpower on what might be considered rote in the first place. If one was doing that, well, that's unfortunate.

I just keep coming to the conclusion about devs who use agents or other AI tooling extensively: these are programmers who did not like to program.


Yeah, I guess I just don't really have a lot of meaningful things to take care of.


I do see the potential in something like OpenClaw, personally, but more as a kind of interface for a collection of small isolated automations that _could_ be loosely connected via some type of memory bank (whether that's a RAG or just text files or a database or whatever). Not all of these will require LLMs and certainly none of them will require vibecoding at all if you have infinite time; But the reality is I don't have infinite time, and if I have 300 small ideas and I can only implement my like 10 of them a week by myself, I'd personally rather automate 30 more than just not have them at all, you know?

But I am talking about shell scripts here, cronjobs, maybe small background services. And I would never dare publish these as public applications or products. Both because I feel no pride about having "made" these - because, you know, I haven't, the AI did - and because they just aren't public facing interfaces.
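For concreteness, the kind of cron-driven throwaway automation I mean is about this size (the function, names, and data are invented for the example; in practice you'd schedule it with something like `0 7 * * * python summary.py` and pipe the output to a notifier):

```python
from datetime import date

def daily_summary(events, today=None):
    """Filter today's events and format a one-line-per-event digest.
    `events` is a list of (date, title) tuples."""
    today = today or date.today()
    todays = [title for day, title in events if day == today]
    return "\n".join(f"- {t}" for t in todays) or "(nothing scheduled)"

events = [(date(2025, 6, 1), "dentist"), (date(2025, 6, 2), "standup")]
print(daily_summary(events, today=date(2025, 6, 1)))  # prints "- dentist"
```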

I think the main issue at the moment is that so many devs are pretending that these vibecoded projects are "products". They are not. They are tailor-made, non-recyclable throwaway software for one person: The creator. I just see no world at the moment where I have any plausible reason to use someone else's vibecoded software.


Our team doesn't use things like OpenClaw. We use Windmill, which is a workflow engine that can use AI to program scripts and workflows. 90% of our automated flows are just vanilla python or nodejs. We re-use 10% of scripts in different flows. We do have LLM nodes and other AI nodes, and although windmill totally supports AI tool calling/Agentic use, we DON'T let AI agents decide the next step. Boring? Maybe. Dependable? Yes.
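As I understand Windmill's convention (this is a sketch, not their exact API), a step is just a plain function named `main` whose return value feeds the next node, so the control flow lives in the flow graph rather than in an agent; the step below and its parameters are hypothetical:

```python
def main(order_total: float, vip: bool = False) -> dict:
    """A Windmill-style step: vanilla Python, no agent deciding what
    runs next; the flow graph wires this output into the next node."""
    discount = 0.10 if vip else 0.0
    return {"charged": round(order_total * (1 - discount), 2)}

# Because it's just a function, it is trivially testable outside the engine:
result = main(100.0, vip=True)  # → {"charged": 90.0}
```

That testability is a big part of why "boring but dependable" holds up.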


Been looking at this over the weekend. It genuinely seems like it could have some really cool use cases. However I just don't trust an AI enough to run unprompted with root access to a machine 24/7, even if it's sandboxed. As soon as I willingly integrate data into it, the sandboxing doesn't really matter, especially when I ask it to decide for itself how to process that data (which seems to be what they want you to do with it? Ask it to define its own skills?)

Most of the cool stuff here, i.e. automatic news or calendar summaries or hue light controls or Discord bot integration or what not, you can also just "vibecode" in an afternoon using regular Claude code. If you actually review said code, you then have the peace of mind of knowing exactly what gets triggered when. I don't really feel comfortable enough to give that control away yet.

And I also feel like the people who _do_ feel comfortable giving this control away also strongly overlap with people who really don't have the understanding to make an informed decision on it...


Sometimes when I use a plugin like this I get reminded just how much of a productivity nerf it is to code without an autocomplete AI. Honestly in my opinion if you write a lot of boilerplate code this is almost more useful than something like Claude Code, because it turbocharges your own train of thought rather than making you review someone else's, which may not align with your vision.

This is a really good plugin. I'm a diehard JetBrains user, I tried switching to VSCode and its various forks many times because of AI but muscle memory from years of use is hard to override. And for a lot of languages JetBrains is just much better, especially out of the box. But they dropped the ball so hard on AI it's unbelievable. Claude Code pulled it back a bit because at least now the cutting edge tools aren't just VSCode plugins, but I was still missing a solid autocomplete tool. Glad this is here to fill that niche. Very likely will be switching my GitHub copilot subscription to this.

I also really appreciate publishing open weights and allowing a privacy mode for anonymous trial users, even if it's opt-in. Usually these things seem to be reserved for paying tiers these days...


I have always said this and had people on HN reply that they don't get much use out of autocomplete, which puzzled me.

I'm starting to understand that there are two cultures.

Developers who are mostly writing new code get the most benefit from autocomplete and comparatively less from Claude Code. CC is neat but when it attempts to create something from nothing the code is often low quality and needs substantial work. It's kind of like playing a slot machine. Autocomplete, on the other hand, allows a developer to write the code they were going to write, but faster. It's always a productivity improvement.

Developers who are mostly doing maintenance experience the opposite. If your workflow is mostly based around an issue tracker rather than figma, CC is incredible, autocomplete less so.


I personally find autocomplete to be detrimental to my workflow so I disagree that it is a universal productivity improvement.


I’m in the “write new stuff with cc and get great code.” Of course I’ll be told I don’t really know what I’m doing. That I just don’t know the difference between good and bad quality code. Sigh.


The best code is no code.[0]

The main "issue" I have with Claude is that it is not good at noticing when code can be simplified with an abstraction. It will keep piling on lines until the file is 3000 lines long. You have to intervene and suggest abstractions and refactorings. I'm not saying that this is a bad thing. I don't want Claude refactoring my code (GPT-5 does this and it's very annoying). Claude is a junior developer that thinks it's a junior. GPT-5 is a junior developer that thinks it's a senior.

[0]: https://www.folklore.org/Negative_2000_Lines_Of_Code.html


Definitely agree here, have had so many cases where I would like ask Claude for XYZ, then ask for XYZ again but with a small change. Instead of abstracting out the common code it would just duplicate the code with the small change.
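The failure mode is easy to show with a toy example (hypothetical code, not from any real session): asked for a variant with one small change, the model emits a near-copy instead of hoisting the difference into a parameter.

```python
# What the model tends to produce: two near-identical functions.
def export_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

def export_tsv(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

# The refactor you have to ask for: the "small change" becomes a parameter.
def export(rows, sep=","):
    return "\n".join(sep.join(map(str, r)) for r in rows)

rows = [(1, 2), (3, 4)]
```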


I agree with you. I do have some local model LLM support configured in Emacs, as well as integration with gemini 3 flash. However, all LLM support is turned off in my setup unless I specifically enable it for a few minutes.

I will definitely try the 1.5B model but I usually use LLMs by taking the time to edit a large one-shot prompt and feed it to either one of the new 8B or 30B local models or to gemini 3 flash via the app, web interface, or API.

Small purpose-built models are largely under-appreciated. I believe that it is too easy to fall into the trap of defaulting to the strongest models and over-relying on them. Shameless plug: it is still incomplete, but I have released an early version of my book ‘Winning Big With Small AI’ - so, I admit my opinions are a little biased!


Do you know a good way to give context files in Emacs? Currently I have to either do C-x h to select the content of individual files to give to the AI, or copy the files themselves into the AI's chat interface. I would much prefer selecting all files at once and getting their content copied to the clipboard.


Yep. I'm coming to resent Claude Code and tools like it for taking me out of direct contact with the code.

I think we're still in the early days of these systems. The models could be capable of a lot more than this "chat log" methodology.

Agree about JetBrains dropping the ball. Saddens me because I've also been a diehard user of their products since 2004.


Glad to hear I'm not alone, the latest releases of JetBrains have been so bad I finally cancelled my subscription. VSCode has been a nice surprise, "its giving old emacs" as the kids would say.


I am curious about how both of you think Jetbrains is dropping the ball so much that you are no longer buying the tool.

You are still using it but no longer getting updates?


I use free RustRover for my open source work. I have not purchased a (new) license for my commercial work, because I haven't been getting as much value out of it since my flow switched to being primarily agentic.

Mainly, they're pushing Junie and it just isn't that good or compelling, when faced off against the competition.

The key thing for me is that I think they had an opportunity here to really rethink how LLMs could interact with an editor since they potentially controlled both the editor and the LLM interaction. But they just implemented another chat-based interaction model with some bells and whistles, and also were late getting it out really, and what they delivered seemed a bit meh.

I was hoping for something that worked more closely inside the editing process, inline in the code, not just completions and then an agentic log alongside.

I also don't like that I can't seem to get it to work with 3rd party LLM providers, really. It seems to allow specifying an OpenAI API compatible endpoint, but it's janky and doesn't seem to allow me to refresh and manage the list of models properly?

It just still seems half-baked.

I love Opus and I am a heavy CC user now, but I don't like that Claude Code takes me out of my IDE, away from hands-on contact with the code, and out of my editing process. And I don't like how it tries to take over, or how weak its review flow is. I almost always end up with surprises during my review process, despite finding the quality of its code and analysis quite good. To me there was a real chance here for a company like JetBrains to show its worth by applying AI in a more sensible way than Anthropic has.

VSCode and Zed have no appeal to me though. I've mostly gone back to emacs.

In the meantime, their IDEs themselves feel a bit stalled in terms of advancement. And they've always suffered from performance problems since I started using them over 20 years ago.


> I have not purchased a (new) license for my commercial work, because I haven't been getting as much value as it since my flow has switched to primarily agentic.

I still buy a personal Ultimate license because I want to see them succeed even if like 80% of my time is spent either in a CLI or Visual Studio Code (for quicker startup and edits), a bit unfortunate that Fleet never got to be really good but oh well.


Also wish Fleet took off, not a fan of installing a new IDE for every separate repo that's in a different language


Over the last two decades I've given them quite a bit of money in personal subscriptions, and indirectly a lot more through employer purchases on my behalf.

I dislike VSCode very much, but I do think the foundational pieces of the JetBrains IDEs are starting to show their age.


I've done some testing before and many of the new Jetbrains internal plugins cause memory leaks which really lags down my IDE...


Hopefully not too offtopic: why so much boilerplate?

I see most would-be-boilerplate code refactored so the redundant bit becomes a small utility or library. But most of what I write is for research/analysis pipelines, so I'm likely missing an important insight. Like more verbose configuration over terse convention?

For code structure, snippet templating[1] ("iff[tab]" => "if(...){...}") handles the bare conditional/loop completions in a more predictable way, offline and without an LLM eating into RAM.

[1] https://github.com/joaotavora/yasnippet; https://github.com/SirVer/ultisnips; https://code.visualstudio.com/docs/editing/userdefinedsnippe...


Abstracting away redundancy can make it harder to understand exactly what the code is doing, and can introduce tech debt when you need slightly different behavior from code that has been abstracted away. Also, if the boilerplate code is configuration, it's good to see exactly what the configuration is when trying to grok how some code works.

You bring up a good point with snippets though, and I wonder if that would be good information to feed into the LLM for autocomplete. That snippet is helpful if you want to write one condition at a time, but say you have a dozen if statements to write with that snippet. After writing one, the LLM could generate a suggestion for the other 11 conditions using that same snippet, while also taking into consideration the different types of values and what you might be checking against.

As for RAM/processing, you're not wrong there, but with specialized models, specialized hardware, and improvements in model design, the number of people working under such restricted environments where they are concerned about resource use will decrease over time, and the utility of these tools will increase. Sure a lower-tech solution works just fine, and it'll continue to work fine, but at some point the higher-tech solution will have similar levels of friction and resource use for much better utility.


Junie is irredeemable but if it's autocomplete that you are unhappy about, IntelliJ has both local- and cloud autocomplete now.


It is depressing that our collective solution to the problem of excess boilerplate keeps moving towards auto-generation of it.


If you really want to deliver polished products, you still have to manually review the code. When I tried actually "vibecoding" something, I got exhausted so fast by trying to keep up with the metric tons of code output by the AI. I think most developers agree that reviewing other people's code is more exhausting mentally than writing your own. So I doubt those who see coding as too mentally straining will take the time to fully review AI written code.

More likely that step is just skipped and replaced with thoughts and prayers.


I do manually review. I don't think the quality of my output has reduced even slightly. I'm just able to do much more. I deliver features more quickly, and I'm making more money, so of course I'm happy. If there was no money in programming I wouldn't be doing it, I think that's the major distinction. I barely have any understanding of how a CPU works, I don't care. I build stuff and people are very happy with what I build and pay me money for it...

