Hacker News

I agree, I usually put this sort of information in the commit message itself. That way it's right there if anybody ever comes across the line and wonders "why did he write this terrible code, can't you just ___".




As a side note, it's becoming increasingly important to write down this info in places where LLMs can access it with the right context. Unfortunately commit history is not one of those spots.

There's no reason an LLM couldn't be (or isn't already being) trained on commit messages.

No difference between a git index and any other binary data (like video).


> There's no reason an LLM couldn't be (or isn't already being) trained on commit messages.

You are arguing that it could. Hypotheticals.

But getting back to reality: today, no coding assistant supports building system prompts from commit history. This means that context isn't used. That is a statement of fact, not a hypothetical.

If you post context in commit messages, it is not used. If you dump a markdown file in the repo, it is used automatically.

What part are you having a hard time understanding?


You seem to be confusing the construction of system prompts with "training". Prompts do not change a model's weights or train it in any way. Yes, they influence output, but only in the same way different questions to LLMs (user prompts) influence output.

Just because current user interfaces don't feed commit messages into the prompt does not mean the model wasn't trained on them. It would be a huge failure for training on version-controlled source code not to include the commit messages, since they are a natural human-language description of what a particular set of changes encompasses (given quality commits, but quality is a different issue).

> You seem to be confusing the construction of system prompts with "training".

I'm not. What part are you having a hard time following?


> But getting back to reality: today, no coding assistant supports building system prompts from commit history. This means that context isn't used. That is a statement of fact, not a hypothetical.

This is a non sequitur. Just because coding assistants don't support building system prompts from commit history doesn't mean LLMs and coding assistants aren't trained on commit messages as part of the massive number of repositories they're trained on.

What part are you having a hard time following?


> As a side note, it's becoming increasingly important to write down this info in places where LLMs can access it with the right context. Unfortunately commit history is not one of those spots.

This is the comment that spawned this tragedy of miscommunication.

My interpretation of this comment is that no current programming agents/LLM tooling utilize commit history as part of their procedure for building context on a codebase.

It is not stating that it *cannot*, nor is it making any assertion about whether these assistants can or cannot be *trained* on commit history, nor about whether commit history is included in training datasets.

All it's saying is that these agents currently do not automatically _use_ commit history when finding/building context for accomplishing a task.


This is hair-splitting, because it's technically not a part of _system prompt_, but Claude Code can and does run `git log` even without being explicitly instructed to do so, today.

There are MCP Servers that give access to git repo information to any LLM supporting MCP Servers.

For example:

>The GitHub MCP Server connects AI tools directly to GitHub's platform. This gives AI agents, assistants, and chatbots the ability to read repositories and code files, manage issues and PRs, analyze code, and automate workflows. All through natural language interactions.

source: https://github.com/github/github-mcp-server
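For what it's worth, wiring that server into an MCP-capable client is usually a few lines of client config. The shape below follows the common `mcpServers` convention, and the docker image name is the one the project publishes; exact keys vary by client, so treat this as an illustrative sketch rather than copy-paste config:

```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ]
    }
  }
}
```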


Also, there are a lot of humans who won't look at the commit history, and in many cases, if the code has been moved around, the history is deep and you have to traverse and read potentially quite a few commits. Nothing kills the motivation more than finally finding the original commit and it mentioning nothing of value. For some things it's worth the cost of looking, but it's ineffective often enough that many people won't bother.
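To be fair, when the interesting commit is buried behind a move or rename, git can usually jump straight to it rather than making you traverse hop by hop. A minimal sketch in a throwaway repo (file names and messages invented for the demo):

```shell
set -eu
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'retry_limit = 7\n' > util.py
git add util.py
git commit -qm "workaround: retry 7 times, vendor API is flaky"

git mv util.py network.py   # the move that turns plain blame into a two-hop hunt
git commit -qm "refactor: rename util.py to network.py"

# Pickaxe: find the commit whose diff introduced the string, skipping pure moves
git log -S 'retry_limit' --oneline
# Follow a single file's history through renames
git log --follow --oneline -- network.py
```

The pickaxe (`-S`) search is the shortcut here: it lands on the commit that actually introduced the line, so the "traverse and read quite a few commits" cost mostly applies when you don't know these flags.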

The OP solved this problem by generating a well-known URL, hosting it publicly, and including a link to the commit in the cursed knowledge inventory.

I usually spot these kinds of changes through git blame, whenever I find a line suspicious and wonder why it was written that way.
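Same, and `git log -L` is a nice companion to blame, since it replays the history of just the suspicious lines. A small self-contained sketch (repo contents invented for the demo):

```shell
set -eu
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'a = read_sensor()\nvalue = a * 1000  # why?\n' > calc.py
git add calc.py
git commit -qm "scale by 1000: sensor reports milliunits"

# Who last touched line 2, ignoring whitespace-only churn (-w)
git blame -w -L 2,2 calc.py
# The full patch history of just that line range
git log -L 2,2:calc.py
```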

You are sadly completely missing the point of ever-self-improving automation. Just also use the commit history. Better yet: don't be a bot slave, controlled and limited by your tools.

> You are sadly completely missing the point of ever-self-improving automation. Just also use the commit history.

I don't think you understand the issue you're commenting on.

It's irrelevant whether you can inject commit history in a prompt.

The whole point is that today's coding assistants do not support this source of data, whereas comments in source files, and even README.md and markdown files in ./docs, are supported out of the box.

If you rely on commit history to provide context to your team members, once they start using LLMs that context is completely ignored and omitted from any output. This means you've been providing context that's useless and has no impact on future changes.

If you actually want to help the project, you need to pay attention to whether your contributions are impactful. Dumping comments into what amounts to /dev/null has no impact whatsoever. Requiring your team to go way out of their way to include extra context from a weird source, which may or may not be relevant, in each prompt is a sure way to ensure no one uses it.
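That said, teams sitting on years of good commit messages aren't stuck: one low-effort bridge is to mine the history into a markdown file that the tooling does read by default. A sketch, assuming a `workaround:` subject-line convention and a `docs/cursed-knowledge.md` target (both invented here):

```shell
set -eu
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

touch app.py
git add app.py
git commit -qm "workaround: pin libfoo to 1.2, 1.3 breaks TLS"

mkdir -p docs
# Mine 'workaround:' commits into a markdown file assistants will actually read
{
  echo '# Cursed knowledge (mined from git log)'
  git log --grep='^workaround:' --pretty='- %h %s'
} > docs/cursed-knowledge.md
cat docs/cursed-knowledge.md
```

Run it from CI or a pre-push hook and the "weird source" becomes an ordinary markdown file, with no extra prompting required from anyone.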


And my answer is: stop being a user when you want to be a developer so bad. Write the tool you need.

(we certainly did with our company-internal tool, but then we're all seniors who only use autocomplete and query mechanisms other than the impractical chat concept)


That sounds like work someone should get paid to do.


