I've been trying to build a new LLM code editor that does just that. When you instruct it to do something, it evaluates your request, tries to analyze its action, object, subject, etc., and maps them to existing symbols in your codebase, or to symbols it's expected to create. If everything maps, it proceeds. If the mapping is incomplete, it errors out, stating that your statement contained unresolvable ambiguity.
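A toy sketch of the "resolve or reject" step described above; the parsed-request shape, the symbol index, and all names here are hypothetical, not an actual implementation:

```python
# Hypothetical sketch: map each noun in a parsed request to a known symbol
# or a symbol the request says it will create; reject anything unresolved.

def resolve_request(parsed, symbol_index, planned_symbols=frozenset()):
    """parsed: {"action": str, "objects": [str, ...]}
    symbol_index: set of symbol names already in the codebase
    planned_symbols: names the request itself declares it will create
    Returns a name -> status mapping, or raises on unresolvable ambiguity."""
    mapping = {}
    for name in parsed["objects"]:
        if name in symbol_index:
            mapping[name] = "existing"
        elif name in planned_symbols:
            mapping[name] = "to-be-created"
        else:
            raise ValueError(f"unresolvable ambiguity: {name!r} maps to no symbol")
    return mapping

index = {"parse_config", "ConfigLoader"}
print(resolve_request({"action": "refactor", "objects": ["parse_config"]}, index))
# {'parse_config': 'existing'}
```

The point is the failure mode: instead of guessing, the editor refuses to proceed until every referent in the instruction is grounded.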
I think there's a real benefit here, and it might be the next genuinely grounded, sustainable use of AI in programming. The current "Claude Code and friends" approach is a state of drunkenness we fell into after the advent of this new technology, and time will show it isn't sustainable.
My main gripe with tmux is the nested use case (tmux session on my local machine, in which I ssh to another machine, only to tmux attach within the remote machine too). Is there a terminal multiplexer/session daemon that supports nested sessions out of the box with ease?
I wrote quite a bit of configuration to support an "outer" tmux process locally and "inner" tmux processes on all the remote hosts where I have different tasks to accomplish. I'm not sure how any software could manage these automatically, but at the very least, configuring my outer session to use Ctrl+a while the inner ones use Ctrl+b works well. I also have aliases that specify a socket, so I can refer to the sessions easily without confusing them.
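A minimal sketch of that split, assuming a reasonably recent tmux (the socket and session names are just examples):

```
# ~/.tmux.conf on the local (outer) machine: rebind the prefix to Ctrl+a
set -g prefix C-a
unbind C-b
bind C-a send-prefix

# On the remote (inner) machines, keep the default Ctrl+b prefix,
# so outer keybinds never collide with inner ones.

# Shell aliases pinning each session to its own named socket (-L):
alias tlocal='tmux -L outer new-session -A -s main'
alias twork='ssh work -t "tmux -L inner new-session -A -s work"'
```

`new-session -A` attaches to the named session if it already exists, which keeps the aliases idempotent.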
Btw, it seems NullClaw is facing the same issue. Currently the first result on Google is a shady website with popups claiming to be NullClaw's, while the actual site (nullclaw.io) doesn't come up.
I've yet to find a satisfying vim AI integration. I want something that blends into my vim workflow and doesn't require me to switch windows and copy-paste, or reload my open buffers after AI agents edit my code.
For instance, I would love for it to melt seamlessly into my workflow: highlight some comments or pseudocode, hit a keybind, and have the AI expand them into actual code. Something along those lines, but not like what we have currently.
Try CodeCompanion if you're using neovim. I have a keybind set up that takes the highlighted region, prepends some context saying roughly "if you see a TODO comment, do it; if you see a WTF comment, try to explain it", and presents an inline diff so you can accept or reject the edits. It's great for tactical LLM use on small sections of code.
For strategic use on any larger codebase, though, it's more productive to use something like plan mode in Claude Code.
I found this to be inaccurate. I can run GPT-OSS 120B (4-bit quant) on my 5090 and 64 GB RAM system at around 40 t/s, yet the site claims it won't work.
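A rough back-of-envelope supports this, assuming ~120B parameters at about 4 bits per weight and ignoring KV cache and activation overhead (the VRAM figure for a 5090 is an assumption here):

```python
# Back-of-envelope: do 4-bit weights for a ~120B model fit in VRAM + RAM?
params = 120e9
bytes_per_weight = 0.5                          # 4-bit quantization
weights_gb = params * bytes_per_weight / 1e9    # total weight storage
vram_gb, ram_gb = 32, 64                        # assumed 5090 VRAM + system RAM
print(weights_gb, weights_gb <= vram_gb + ram_gb)
# → 60.0 True
```

So the weights alone (~60 GB) don't fit in VRAM, but with layers offloaded to system RAM the model fits comfortably, which is consistent with it running at reduced but usable speed.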
I agree with the general sentiment, but I'll push back slightly on the "nobility" of any engineering pursuit. Such pursuits are amoral (not immoral) and highly context-specific.
Assume an "Evil" state developed defensive technology that can foil any nuclear attack against it. That would let this "Evil" state use its own nuclear weapons without fear of retaliation. So in this example, innovation in defensive technology enabled war and destruction.
Well of course, which is why the development of such defence tech was prohibited in the ABM Treaty. But that doesn't stop non-nuclear states from developing anti-nuke defence technology; perhaps the only reason they don't is that it's harder than building a nuke.