Hacker News
Show HN: ChatGDB – GPT-Powered GDB Assistant (github.com/pgosar)
189 points by pgosar0514 on April 7, 2023 | 54 comments
ChatGDB is a tool designed to superpower your debugging experience with GDB, a debugger for compiled languages. Use it to accelerate your debugging workflow by leveraging the power of ChatGPT to assist you while using GDB!

It allows you to explain in natural language what you want to do, and it will automatically execute the relevant command. Optionally, you can ask ChatGPT to explain the command it just ran, or pass in any question for it to answer. Focus on what's important - figuring out that nasty bug instead of chasing down GDB commands that are on the tip of your tongue.

See it here: https://github.com/pgosar/ChatGDB
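For example, a session might look roughly like this (illustrative only - exact commands and output will vary):

    (gdb) chat set a breakpoint at line 8
    break 8
    Breakpoint 1 at 0x1149: file main.c, line 8.
    (gdb) explain
    "break 8" sets a breakpoint at line 8 of the current source file.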





What's the opposite of Perl? Because that one line's eminently readable


It's called "conversational programming" and it was all the rage in the '60s or '70s. Maybe it's time for a resurgence?


SQL grew out of that trend btw :)


Oh my god, if SQL is the “conversational” option what other design did they consider? Binary?


It's been 50+ years and still nobody has invented anything better that people actually use.


I mean, I’m using ChatGPT to render natural language down to executable SQL for a project right now.

It’s not a universal solution, as it’s specific to my data architecture, but if I were to devote a month to it I’m fairly sure I’d make a big dent in a generalized solution. And I’m VERY sure somebody else is already 90% done and we’ll see it on GitHub soon!
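The core of it is just a system prompt with the schema baked in. A minimal sketch of the pattern, using the 2023-era `openai` package (the schema, model choice, and prompts here are made-up placeholders):

    import os
    import openai  # pre-1.0 openai package

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Hypothetical schema; in practice this comes from your data architecture.
    SCHEMA = "customers(id, name), orders(id, customer_id, total, created_at)"

    def nl_to_sql(question: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Translate the user's question into a single SQL query "
                            f"against this schema: {SCHEMA}. Reply with SQL only."},
                {"role": "user", "content": question},
            ],
        )
        return resp["choices"][0]["message"]["content"]

    print(nl_to_sql("total revenue per customer last month"))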


True, and I'm a fan of that as well. But I'm not sure it counts as dethroning SQL :) Unless you're also counting it as dethroning every other programming language it knows how to code in... although...


Is there something like this for the command line in general? I'd love to have a ? command that takes an optional prompt, feeds my current shell history to ChatGPT, and gets a recommendation for the last command that failed.

Something like https://github.com/nvbn/thefuck but with some intelligence instead of hard-coded rules.
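Roughly what I have in mind, as a sketch (assumes Bash history on disk and the pre-1.0 `openai` package; all names here are made up):

    # gpt_why.py - wire up as e.g. alias '?'='python3 gpt_why.py'
    # (or pick a plain name if your shell objects to '?')
    import os
    import sys
    import openai  # pre-1.0 openai package

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Tail of the shell history. Caveat: Bash only flushes history on exit,
    # and the file may contain secrets.
    with open(os.path.expanduser("~/.bash_history")) as f:
        history = f.read().splitlines()[-20:]

    extra = " ".join(sys.argv[1:]) or "Why did my last command fail, and what should I run instead?"
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You debug failed shell commands."},
            {"role": "user", "content": "Recent history:\n" + "\n".join(history) + "\n\n" + extra},
        ],
    )
    print(resp["choices"][0]["message"]["content"])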

That's somewhat problematic if you have secrets in there, but you could just not use it if you know you have sensitive things in your shell history. You need to be aware of that even without ChatGPT, since most systems save your shell history to the file system, where it can remain accessible for much longer than you'd think. IIRC they don't use things sent to the API for training, but I wouldn't depend on that.



Security isn’t a problem if you’re running the model locally. I came across this project and would love to build an embedded version: https://github.com/BuilderIO/ai-shell


If you feel brave and have credits to burn: https://github.com/jla/gpt-shell (shameless self-promotion)

Nah, seriously, don't use it - it's just a small experiment. The curious thing is that half of the code is the GPT prompts..


This is hilarious - its actions are either outright wrong or already supported by the native shell itself.


A simpler and probably better idea is to create an alias that sends a request to the ChatGPT API via curl. You can create multiple aliases: one to explain a command, another to fix a command.
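Same idea shown in Python rather than raw curl, since both modes fit in one script (a sketch; every name here is made up):

    # gpt_alias.py - e.g. alias explain='python3 gpt_alias.py explain'
    #                     alias fix='python3 gpt_alias.py fix'
    import os
    import sys
    import openai  # pre-1.0 openai package

    openai.api_key = os.environ["OPENAI_API_KEY"]

    mode, cmd = sys.argv[1], " ".join(sys.argv[2:])
    task = {
        "explain": "Explain what this shell command does:",
        "fix": "This shell command is broken; suggest a corrected version:",
    }[mode]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{task} {cmd}"}],
    )
    print(resp["choices"][0]["message"]["content"])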


Warp Terminal has an AI command search built in, using natural language. It's pretty nice, if you can get past the fact that it requires an account and isn't OSS.


Haha, that is really cool. However, this is one of those things where I have no idea how to tell if ChatGPT is bullshitting me, because I don't know how to GDB on my own.


It's also the type of thing where it's extremely unlikely to bullshit.

And ALSO the type of thing where I think you can work out if it's bullshitting you on your own; if it works, it's not bullshitting.


With technical things "bullshitting" is less of a problem than "wrong version" type of stuff, IME.

GDB... probably a slowly enough moving target, though. ;)

And even on the things I've seen it get wrong, even in higher-churn areas like front-end-Javascript, it's usually "usefully wrong" in that now I've tried one more thing and often end up aware of several more things to look up to try to figure out the right one.


That's why LLMs are best suited to programming. You know pretty quickly if it's lied to you.


I’m sorry, but these arguments are terrible. I see them boiling down to this:

“Use GPT to do this thing!”

“But GPT is notorious for confidently getting things wrong”

“That’s ok! Because you’ll know it’s wrong!”

Maybe I’m a little sore right now because I literally just finished trying to have GPT write code for me and it gave me code with all kinds of syntax errors and mistakes. Yes, I know not to trust it because the compiler flagged the problems, but what now? I go write the code myself I guess. Yay GPT?

What problem is GPT solving here then?


It gets it wrong maybe as often as I do on my first shot, but it does it much faster -- MUCH MUCH faster if any of the tech involved is something I'm not familiar with. And no matter who writes the code, me or the machine, I still quickly read it right after it's written.

Look, I've been doing this for 25 years, both as a career and continuously on the side for fun, and I am definitely an expert in a lot of things at this point. I'm allergic to saying crap like that about myself but feel it's necessary in this case, because it has transformed how I use a computer in a few short weeks.

My very first attempt with this stuff was actually a distinctly miserable experience, and then something clicked. I wish I could articulate exactly what, but it's now one of the most joyous experiences I've had in my career.


Exactly. I used to dread the 20+ times a day I would have to go to Stack Overflow and wade through a mountain of horse shit in the hopes of finding an answer. Now I can clearly describe the problem as a programmer would and ChatGPT frequently gets the solution on the first or second try.

I had one bad experience last week where I burned 20 minutes as ChatGPT gave me one invalid approach after another. Then I consulted the docs and learned that what I wanted to do explicitly couldn't be done with that library. Ten steps forward and one step back. Totally worth it.


GPT is excellent for when you already know that the thing you want to do exists but you can't remember the exact syntax or API method. What might take me 5-10 minutes of googling and reading documentation, GPT can usually produce from a single statement of what I'm looking for.


You can paste the compiler errors in and it’ll revise. But I think it helps to get a sense for what it can do well and what it can’t. It struggles with stuff that involves gnarly type level shenanigans, for example. But every time I’ve struggled with it I ended up realizing I was asking the wrong question. It’s not a magical device that solves all problems for you. It accelerates you when you know very well where you are and have the right idea of where you want to go. Try using it for boring refactoring tasks.

It’s also not bad with explaining code, so it can help with the “where you are” bit, too.


When the space of possible commands is large like GDB, it can get you the right incantation faster. It’s great for ramping up on a language or tool you are not fluent in; for example “how do you do X in C++”.
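For example, the sort of mapping I mean (the phrasings are invented, but the GDB commands themselves are standard):

    "stop whenever main is called"   ->  break main
    "show me the call stack"         ->  bt
    "stop when counter changes"      ->  watch counter
    "print x after every step"       ->  display x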

It’s not great IME for writing large chunks of code, though some seem to be having good enough results there.


My theory: it's like gambling. Sometimes it's a payout, other times it's bland, so you're waiting for the dopamine hit.


I have bad news to tell you about our ability to verify if a program matches a description of what it does on all inputs.


Regardless of whether you wrote it or an LLM did, yeah.


Did anyone use this for an actual project? I am wondering how these chatX applications actually help.

When I had to use a disassembler, the issue was not the commands, because those become muscle memory after a few hours of use. (LLM solutions are anti-muscle-memory, because for the same prompt you can get different results.)

The issue I had was wrapping my brain around what is actually going on, keeping track of pointers, addresses and so on.

Maybe this chat solution can help with some of that, but I often am not sure what I actually am looking for, so I can’t ask for it. Until it gets to “fix the bug / find the overflow” I just don’t see the extra value.


I use GDB so rarely that I often forget the commands between sessions. Then I use print debugging when it’s not the best tool, just because I don’t want to look up the commands. I could see the use for this kind of thing.

Same problem when writing shell scripts. It’s like they tried to invent the hardest-to-remember language. I often use Python instead of Bash just because I can remember it and read it.


I use GDB all the time… but never directly, always through an IDE’s debugging abilities. So I can’t see any use for tools like this one? Dunno, maybe I’m missing something and lots of people use it directly, but IDE integration is what makes it super useful for JTAG debugging of firmware at work for me.


I do embedded development, so I’m usually attaching a gdb session to the server spawned by this esoteric chip’s programmer/debugger. I could possibly make use of this I suppose.


I think I’m reasonably spoiled by the ESP-IDF and STM32 toolchains’ integrations into VSCode; aside from launching OpenOCD myself, I never have to manage GDB directly.


> When I had to use a disassembler, the issue was not the commands, because those become muscle memory after a few hours of use.

My few hours of usage are 30 different 5-minute stints over the course of 6 months. I don't remember anything between usages. English debugger prompts will be amazing.


Also give me ChatAPT, the tool that helps you when your Debian package manager is in a broken state. And similar tools ChatPIP and ChatConda. Hell, just give me ChatAdmin.


Feels like the dotcom phase all over again. Next is ChatWithDominoPizza.


Hilariously, one internal version of Bard really, really wanted to send you a pizza. My guess is it was trained on the positivity of /r/Random_Acts_Of_Pizza/, but who knows?


Actually, that might be a future project! I am a bit worried about how much data I'd be sending on each request and how that may eat into tokens though, especially for more troublesome issues.


Yes, I'm trying to catch up with the llama crowd - I wouldn't mind giving recursive shell access to a locally hosted, offline language model. On the other hand, hooking up a networked LLM to your networked shell is essentially an RCE by design.


Give it a week!


GPT man pages would be awesome. This is pretty close to that for GDB. Although I suspect this is a limited view of the possibilities, akin to thinking "the internet sounds neat, I can use it to find new pen pals".

It also makes me wonder if the command line will return as the premier PC interface. Why would I want to use your web page, with all its lovely design and branding, when I can just ask the command line to show me the information the way I want?


Seems like it would be very useful to feed it context about the state of the program and ask it to decompile disassembly, explain what kind of fault happened based on clues left in the register state, etc.


Right, I heavily considered adding that in, but at least for now I elected not to, since it would be rather unwieldy to send enough context for the output to be useful. Not to mention GPT-3 isn't the best at telling you why your code is broken in most cases.


Next up: AI programs that analyze kernel code and report on constructs that probably don't work the way we think they do.


I've used summary plugins for bninja and Ghidra. Can I ask, were the results bad when you tried to get it to explain a snippet of GDB output?


I am sure ChatGPT can deal with GDB. Recently I wanted it to help me with the Windows command-line debugger, and it bullshitted half of it.


“ChatGDB, find me the memory leak!” is what I would want to say! Not dictate commands I don’t even know I need.


https://github.com/openai/triton/pull/1358#issue-1628393794

>One fun thing - after tracking down the code to the block of C++ code, ChatGPT-4 is what actually found the memory leak for me :)

although I can't imagine what the prompt was.


I don't have GPT-4, though we could test with a prompt like:

"Thinking step by step, what does this code do, is there a problem?"


Ok, this looks pretty slick! I'll try it out next week as I have a few seg faults I need to figure out.


Good looking terminal output. I like the font and color theme.


I believe it's Solarized Dark. The font I don't know.


I recognized the 'g' and 'r' from Fira Mono, but from the slashed zero and ligatures (the raised x in 0x), I think the font is Fira Code[1].

I don't think the color scheme is Solarized Dark. Looks closer to Nord[2].

[1]: https://fonts.google.com/specimen/Fira+Code?preview.text=(gd...

[2]: https://github.com/nordtheme/terminal-app


It's great. Now you can type "chat stop my code at line 8" instead of "b 8" :-)



