Hacker News | comboy's comments

Perhaps we should start making LLM open source projects (clearly marked as such): created by LLMs, open for LLM contributions, with some clearly defined protocols. It'd be interesting to see where it would go. I imagine it could start as a project with a simple instruction file to include in your own project, telling the LLM to look for abstractions that could be useful to others as a library, and to look for specific kinds of libraries. Some people want to help others even if they are effectively sharing money+time rather than their skill.

Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a contributor to some known project sounds better than having some LLM-generated code under your name.


OpenClaw https://github.com/openclaw/openclaw is effectively that - 1,237 contributors, 19,999 commits and the first commit was only back in November.

Simon, as co-creator of Django, what's your take on this story?

I think this line says everything:

> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.


I love it. Sounds like good advice for submitting a PR to any project!

Please do, that would be amazing.

You'd have to manage the contributions, or get your AI bots to manage them or something, but it would be great to have honeypots like this to attract all the low effort LLM slop.


Moltbook meets GitHub? Sounds like a billion dollar valuation (sarcasm tag deliberately omitted).

Actually, I'd want to see that. All the AI companies keep saying it will take our jobs, human developers won't be necessary.

Well let them put their money where their mouth is. Let's see what happens, see what the agents create or fail to create. See if we end up with a new OS, kernel all the way up to desktop environment.


Enshittification (by Cory Doctorow) is a shitty book but it does explain how that dynamic works.

That's your worldview. Crocker's rules say you don't have to take the receiver's feelings into account, you just communicate efficiently.

No, abiding by Crocker's rules isn't that you don't take into account other people's feelings. It's that other people don't have to take into account your feelings.

Applying them to only one side of the conversation doesn't seem practical.

It perfectly is.

From https://www.lesswrong.com/w/crockers-rules:

> Note that Crocker's Rules does not mean one is authorized to insult people; it means that other people don't have to worry about whether they are insulting you. Crocker's Rules are a discipline, not a privilege. Furthermore, taking advantage of Crocker's Rules does not imply reciprocity. How could it? Crocker's Rules are something you do for yourself, to maximize information received - not something you grit your teeth over and do as a favor.


Alright, my original comment was wrong (as was the parent). I still stand by my opinion that it is not practical though.

Not sure if it's common knowledge, but I learned not that long ago that you can do "/compact your instructions here" - if you explicitly say what you are working on or what to keep, it's much less painful.

In general, LLMs for some reason are really bad at designing prompts for themselves. I tested this heavily on some data where there was a clear optimization function and the ability to evaluate the results, and my chaotic, typo-ridden prompts easily beat Opus every time against the methodical ones it wrote for itself or for other LLMs.


You can also put guidance into Claude.md for when to compact and with what instructions. The model itself can run /compact, and while I try to remember to use it manually, I find it useful to have: “If I ask for a totally different task and the current context won’t be useful, run /compact with a short summary of the new focus.”
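Concretely, since CLAUDE.md is just free-form instructions, guidance like the comment above might look something like this (the wording is illustrative, not any official syntax):

```markdown
## Context management

- If I ask for a totally different task and the current context
  won't be useful for it, run /compact with a one-line summary
  of the new focus before starting.
- When compacting, preserve the list of files currently being
  edited and any decisions we've already agreed on.
```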

I often wonder if I'm missing something, but shouldn't we be able to edit the context manually???

In that way we could erase prompts and responses that didn't yield anything useful or derailed the model.

Why can't we do that?


Your sim card is an entire computer.

It runs Java!

I highly recommend AeroSpace[1]. I went through a few approaches, and I cared about not completely compromising security either; it works really well if you come from something like i3.

1. https://github.com/nikitabobko/AeroSpace
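For anyone coming from i3, AeroSpace is configured via a TOML file at ~/.aerospace.toml, and a minimal i3-style setup might look roughly like this (key names and commands as I understand AeroSpace's config format; check the project docs before relying on it):

```toml
# Minimal ~/.aerospace.toml sketch with i3-like bindings (illustrative)
start-at-login = true

[mode.main.binding]
# vim-style directional focus, as in i3
alt-h = 'focus left'
alt-j = 'focus down'
alt-k = 'focus up'
alt-l = 'focus right'

# switch to / move windows between workspaces
alt-1 = 'workspace 1'
alt-2 = 'workspace 2'
alt-shift-1 = 'move-node-to-workspace 1'
alt-shift-2 = 'move-node-to-workspace 2'
```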


Seconding this. I find macOS unusable without it. I'll ask here because web search is failing me: is there a way to fix the focus stealing that happens when you have multiple windows of an application on different displays? Specifically, say workspaces 1 and 2 are on monitor Left, while 3 and 4 are on Right. Application A has a window on workspace 1, B has one window on 2 and another on 3, and C has a window on 4. Workspace 1 is active on display Left, workspace 4 on Right. If I switch to workspace 3, the following happens:

- the switch goes through, Left displays workspace 1, right displays 3 (desired state)

- Application B is focused, presumably because its window on 3 becomes active (also desired)

- Display Left switches to display workspace 2, presumably because it contains a window belonging to the newly focused application B? (I don't want this)

- the window of application B on workspace 2 steals focus from the one on workspace 3 (???)


Thirding the recommendation, and I also have this same issue. It's quite frustrating—but still better than no Aerospace!

So what you’re saying is:

Charlie's paternal grandfather Reginald married twice—first to Mildred, mother of Charlie's father Arthur and his siblings Beatrice (a nun with spiritual godchildren) and Cecil (whose widow Dorothy married Charlie's maternal uncle Edward). What is the name of Charlie's goddaughter?


What is the $/Mtok that would make you choose your time vs savings of running stuff locally?

Just to be clear, it may sound like a snarky comment but I'm really curious how you or others see it. I mean, there are some batched long-running tasks where, ignoring electricity, it's kind of free, but usually local generation is slower (and worse quality), and we all kind of want some stuff to get done.

Or is it not about the cost at all, just about not pushing your data into the clouds?
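One back-of-the-envelope way to frame the $/Mtok question is to compute the electricity cost per million tokens of local generation. All the numbers below are assumptions for illustration (a hypothetical 0.5 kW rig generating 30 tokens/s at $0.15/kWh), not measurements:

```python
# Electricity cost of local generation in $/Mtok, with
# illustrative (assumed, not measured) numbers.
power_kw = 0.5        # assumed power draw of a local GPU rig
price_per_kwh = 0.15  # assumed electricity price in $
tokens_per_sec = 30   # assumed local generation speed

tokens_per_hour = tokens_per_sec * 3600
cost_per_hour = power_kw * price_per_kwh
dollars_per_mtok = cost_per_hour / tokens_per_hour * 1_000_000

# Electricity only; ignores hardware cost and your time.
print(f"${dollars_per_mtok:.2f}/Mtok")
```

With these assumed numbers it comes out well under a dollar per million tokens, which is why the comparison usually hinges on hardware cost, speed, and quality rather than electricity.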


Good question. I agree with what I think you're implying, which is that local generation is not the right choice if you want to maximize results per time/$ spent. In my experience, hosted models like Claude Opus 4.6 are just so effective that it's hard to justify using much else.

Nevertheless, I spend a lot of time with local models because of:

1. Pure engineering/academic curiosity. It's a blast to experiment with low-level settings/finetunes/lora's/etc. (I have a Cog Sci/ML/software eng background.)

2. I prefer not to share my data with 3rd party services, and it's also nice to not have to worry too much about accidentally pasting sensitive data into prompts (like personal health notes), or if I'm wasting $ with silly experiments, or if I'm accidentally poisoning some stateful cross-session 'memories' linked to an account.

3. It's nice to be able to solve simple tasks without having to reason about any external 'side-effects' outside my machine.


For me it's a combination of privacy and wanting to be able to experiment as much as I want without limits. I'd happily take something that is 80% as good as SOTA but I can run it locally 24/7. I don't think there's anything out there yet that would 100% obviate my desire to at least occasionally fall back to e.g. Claude, but I think most of it could be done locally if I had infinite tokens to throw at it.

i can think of some tasks (classification, structured info extraction) that i _imagine_ even small meh models could do quite well at

on data i would never ever want to upload to any vendor if i can avoid it


Under this name or not, I think it's happening regardless.

As any etymology/Latin nerd will tell you, "this name" (MalusCorp) literally translates to EvilCorp; everything about the site is over-the-top satire. I know Poe's law and all that, but I'm looking askance at commenters in this thread who fail to realize it, as either having only read the headline or being AI-controlled.

Satire points out the absurd


one could say calling a company "palantir" isn't too far off that.

Limited precision of float numbers is deterministic. But there's the whole matter of parallelism and how things are wired together; your generation may end up on different hardware, etc.

And the models I work with (Claude, Gemini, etc.) have a temperature parameter when you are using the API.
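To make the parallelism point concrete: each individual float operation is deterministic, but float addition is not associative, so a parallel reduction that accumulates partial sums in a different order can produce a different total. A minimal sketch in Python:

```python
# Floating-point addition is deterministic but not associative:
# summing the same three values in a different grouping gives
# different results. This is why parallel reductions, whose
# accumulation order can vary between runs or hardware, are not
# bit-for-bit reproducible even at temperature 0.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0
right = a + (b + c)  # -1e16 + 1.0 rounds back to -1e16, so 1e16 + (-1e16)

print(left, right)   # prints: 1.0 0.0
```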


I agree, but let's say you are looking for a library to solve your problem - you see one repo updated 2 weeks ago and the other one updated 5 years ago - which one do you choose?

That depends. What problem do I have, exactly?

Do I need a library to sort an array? The 5 years ago option is going to be the more likely choice. A library updated 2 weeks ago is highly suspicious.

Do I need a library to provide timezone information? The 2 weeks ago option, unquestionably. The 5 years ago option will now be woefully out of date.


Perhaps some kind of ‘this code is still alive’ flag is key. Even just updating the project. Watching issues. Anything showing ‘active but done’.
