I've seen many people say "I don't get the hype", so here's my attempt to explain it. I've been working in technology and software companies my entire life, but not as a developer.
Two days ago, I submitted my first pull request to an open source project (Clawdbot) and had it merged, thanks to my AI assistant rei.
A short story: rei suddenly stopped responding in some Slack channels. So I asked it to help me troubleshoot.
We traced the issue: adding custom instructions in one Slack channel incorrectly stopped it from replying in all the others.
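(For the technically curious, here's a rough sketch of the kind of scoping bug this was. A minimal sketch only: the names, structure, and the choice of TypeScript are my own assumptions for illustration, not Clawdbot's actual code.)

```typescript
// Hypothetical illustration only; none of these names come from the Clawdbot codebase.
type ChannelId = string;

// Per-channel custom instructions, keyed by Slack channel id.
const instructionsByChannel = new Map<ChannelId, string>();

// Buggy gate: once ANY channel has custom instructions, channels without an
// entry get treated as unconfigured and are silently skipped.
function shouldReplyBuggy(channel: ChannelId): boolean {
  if (instructionsByChannel.size > 0) {
    return instructionsByChannel.has(channel); // drops every other channel
  }
  return true;
}

// Fixed gate: custom instructions change *how* the bot replies in a channel,
// never *whether* it replies in the others.
function shouldReplyFixed(_channel: ChannelId): boolean {
  return true;
}

export { instructionsByChannel, shouldReplyBuggy, shouldReplyFixed };
```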
I considered reporting the issue in GitHub, but then I thought, "Well... what if we just try to fix it ourselves, and submit a PR?"
So we did. We cloned the codebase, found the issue, wrote the fix, added tests. I asked it to code review its own fix. The AI debugged itself, then reviewed its own work, and then helped me submit the PR.
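(And if "added tests" sounds abstract, the regression test we wanted has roughly this shape. Again a sketch with invented names, written Vitest-style rather than with whatever test setup the project actually uses.)

```typescript
import { describe, expect, it } from "vitest";

// Stand-ins for the hypothetical reply gate sketched above; the real project's
// API almost certainly looks different.
const instructionsByChannel = new Map<string, string>();
const shouldReply = (_channel: string): boolean => true;

describe("per-channel custom instructions", () => {
  it("keeps replying in channels without custom instructions", () => {
    instructionsByChannel.set("C_PROJECT", "Answer tersely.");

    expect(shouldReply("C_PROJECT")).toBe(true); // the configured channel
    expect(shouldReply("C_RANDOM")).toBe(true); // every other channel: the regression
  });
});
```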
Hard to accurately describe the unlock this has enabled for me.
Technically, it's just an LLM call, and technically, I could have done this before.
However, there is something different about this new model of "co-working with an AI that has context on you and what you're doing" that just clicks.
I can't parse this story. "rei" stopped working and you asked "rei" or "clawdbot" to help you troubleshoot? Are you using both? Who is 'we' in the "we fixed it ourselves" substory?
Clawdbot is the software; they installed their own instance of it and named it "rei". So an instance of Clawdbot named rei helped them fix a problem they observed in Clawdbot/rei.
I disagree with your Dropbox example. Dropbox is apparently easier to use than a self-hosted FTP site and is well maintained by a company, but this Clawdbot is just a project developed by one dev. There are many similar "click to fix" services.
Not exactly: Clawdbot is an open source project with hundreds of contributors (including me!) in only 3 weeks of existence. Characterizing it as just a one-man project is inaccurate.
I'm genuinely sorry you think that, and it's not my intention to offend you.
However, your comment reads exactly like telling a Dropbox user, "This is just someone using rsync, setting up a folder sync in a cron job, running the cron job, and saying 'wow, isn't Dropbox great.'"
Sometimes the next paradigm of user interface is a tweak that re-contextualizes a tool, whether you agree with that or not.
This is a GitHub user on GitHub using a GitHub feature through the GitHub interface on the GitHub website that any GitHub user with a GitHub project can enable through GitHub features on GitHub.
And the person is saying "my stars! Thanks clawdbot"
There's obviously an irrational cult of personality around this programmer and people on this thread are acting like some JW person in a park.
First, those are completely different sentiments. One is a feature built into the product in question; the other is a hodgepodge of shit.
Second, and most importantly, Dropbox may as well not exist anymore. It’s a dead-end product without direction. Because, and this is true, it was barely better than the hodgepodge of shit AND they ruined that. Literally everything can do what Dropbox does and do it better now.
What specific aspect of this is a GitHub feature? Can you link to the documentation for that feature?
The person you're replying to mentions a fairly large number of actions here: "cloned the codebase, found the issue, wrote the fix, added tests. I asked it to code review its own fix. The AI debugged itself, then reviewed its own work, and then helped me submit the PR."
If GitHub really does have a feature I can turn on that just automatically fixes my code, I'd love to know about it.
> We cloned the codebase, found the issue, wrote the fix, added tests. I asked it to code review its own fix. The AI debugged itself, then reviewed its own work, and then helped me submit the PR.
Did you review the PR it generated before it hit GitHub?