
Very cool! I think you're on to something here, but I don't feel it's interesting or good enough yet to move me away from OpenAI's shitty chat interface.

I do some prompt engineering on a daily basis, and I find that the challenge is often less about the writing and more about ensuring that the output is what I want, and consistently so.

Thinking about useful features for this tool, improved tooling for testing output comes to mind. One part is validating a single output, but another is running the same query in bulk to check that the output validates every time. The model often gets it right on one run and wrong on the next, for the same prompt.
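
To make that concrete, here's a rough sketch of the kind of bulk check I mean, assuming the OpenAI Python SDK; the model name, prompt, and validation rule are all placeholders for illustration:

    # Rough sketch of a bulk consistency check with the OpenAI Python SDK.
    # The model name, prompt, and validation rule are placeholders.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = "Summarize this ticket as JSON with a 'priority' key: ..."
    RUNS = 20

    def validate(text):
        # Example rule: output must parse as JSON and contain 'priority'.
        try:
            return "priority" in json.loads(text or "")
        except json.JSONDecodeError:
            return False

    failures = 0
    for _ in range(RUNS):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": PROMPT}],
        )
        if not validate(resp.choices[0].message.content):
            failures += 1

    print(f"{failures}/{RUNS} runs failed validation")

A tool that surfaced this kind of pass rate per prompt would cover most of what I'm after.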

Just some thoughts from my side!

Johoba, thanks for the feedback. We've noticed a lack of tooling in this area and want to build a better experience for you.

One of Wale IDE's features is running your prompts over a CSV of data; see "Import data or start from scratch" on our landing page. Give it a try and see if it helps with testing your prompts.
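
In spirit, the pattern is a prompt template filled in from each row of your CSV. This hypothetical snippet (not our actual implementation; file name and column are placeholders) shows the idea:

    # Hypothetical illustration: fill a prompt template from each CSV row.
    # "reviews.csv" and the "review" column are placeholders.
    import csv

    TEMPLATE = "Classify the sentiment of this review: {review}"

    with open("reviews.csv", newline="") as f:
        for row in csv.DictReader(f):
            prompt = TEMPLATE.format(review=row["review"])
            # send `prompt` to the model here and record the output
            print(prompt)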

Let us know if you'd be open to a short 15-minute call: https://cal.com/zach-zhao/20min?duration=20
