Hacker News
Ask HN: How much can prompt engineering improve something?
1 point by bigEnotation on Sept 7, 2023 | 2 comments
I've created a GitHub Action that provides code reviews as annotations on PRs [1], but it's producing some pretty odd feedback [2]. I'm not sure whether the problem is my use of OpenAI's functions [3], my system prompt [4], or if GPT just isn't that good at isolated code reviews.

[1] https://github.com/marketplace/actions/chat-gpt-code-peer-review

[2] https://github.com/edelauna/discord-bot-ai/pull/74/files

[3] https://github.com/edelauna/gpt-review/blob/main/src/openai/utils/make-review.ts#L11

[4] https://github.com/edelauna/gpt-review/blob/main/src/openai/utils/message-manager.ts#L13
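For context, the function-calling side looks roughly like this (a simplified sketch, not the exact code in [3]; the function name and schema here are made up for illustration):

    import OpenAI from "openai";

    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    async function reviewDiff(diff: string) {
      // Ask the model to answer via a function call so the review comes back
      // as structured JSON rather than free-form prose.
      const completion = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [
          { role: "system", content: "You review GitHub pull request diffs." },
          { role: "user", content: diff },
        ],
        functions: [
          {
            // "add_review_comment" is an illustrative name, not the one in the repo.
            name: "add_review_comment",
            description: "Attach a review comment to a line in the diff",
            parameters: {
              type: "object",
              properties: {
                path: { type: "string", description: "File the comment applies to" },
                line: { type: "integer", description: "Line number within the diff" },
                comment: { type: "string", description: "The review comment itself" },
              },
              required: ["path", "line", "comment"],
            },
          },
        ],
        function_call: { name: "add_review_comment" },
      });

      const call = completion.choices[0].message.function_call;
      // The arguments arrive as a JSON string and still need parsing/validation.
      return call ? JSON.parse(call.arguments) : null;
    }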




It's your prompts; they're way too casual. Also, starting your system prompt with "You are a lazy..." pretty much guarantees crappy output. Tell the AI what you want it to do, i.e. annotate PRs on GitHub, and then maybe break that down further. If you don't, it expects the full code to be available, which is why you get the unused-imports "errors". Right now you're explaining just enough for it to make mistakes, but not enough for it to do the task you actually want. Also scrap the JSON part... it knows.
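Something along these lines already gets you most of the way (the wording is just an illustration of the level of specificity, not a drop-in replacement for what's in message-manager.ts):

    // Illustrative system prompt, not the repo's actual wording.
    const systemPrompt = [
      "You are a senior TypeScript reviewer annotating a GitHub pull request.",
      "You are given a single file's diff, not the whole repository,",
      "so do not flag issues such as unused imports that depend on code you cannot see.",
      "For each genuine problem, report the file path, the diff line number,",
      "and a short, actionable comment.",
    ].join(" ");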


That's great, thanks for the feedback. I'll update the prompt to be more specific.



