Hacker News

> By that I mean, you can't tell if it's returning absolute trash or spot on correct.

You use it for things that are 1) hard to write but easy to verify, like doing drudge-work coding tasks for you, rewording an email to be more diplomatic, or coming up with good tweets on some topic; or 2) things where it doesn't need to be perfect, just better than what you could do yourself.

Here's an example from last week where it saved me some annoying coding drudge work:

https://gitlab.com/-/snippets/2567734

And here's an example where it saved me having to skim through the massive documentation of a very "flexible" library to figure out how to do something:

https://gitlab.com/-/snippets/2549955

In the second category: I'm also learning two languages; I can paste a sentence into GPT-4 and ask it, "Can you explain the grammar to me?" Sure, there's a chance it might be wrong about something, but it's less wrong than the random guesses I'd be making by myself. As I gain experience, I'll eventually correct all the mistakes, both the ones I got from making my own guesses and the ones I got from GPT-4, and the help I've gotten from GPT-4 makes those mistakes worth it.




I think you have pointed out two extremely useful capabilities.

1. Bulky edits. These are conceptually simple but time-consuming to make. Example: "Add an int property for itemCount and generate a nested builder class."

GPT-4 can generally do these pretty well, and it takes care of related concerns, like updating hashCode/equals, without your needing to specify them.
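As a sketch of what such a prompt might produce (the Item class and its name field are hypothetical, invented here for illustration):

```java
import java.util.Objects;

// Hypothetical class illustrating the "bulky edit": an int itemCount property
// was added, a nested Builder generated, and equals/hashCode updated to
// include the new field.
class Item {
    private final String name;
    private final int itemCount; // the newly added property

    private Item(Builder b) {
        this.name = b.name;
        this.itemCount = b.itemCount;
    }

    public int getItemCount() { return itemCount; }

    // The generated nested builder class.
    public static class Builder {
        private String name;
        private int itemCount;

        public Builder name(String name) { this.name = name; return this; }
        public Builder itemCount(int itemCount) { this.itemCount = itemCount; return this; }
        public Item build() { return new Item(this); }
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Item)) return false;
        Item other = (Item) o;
        // itemCount included here without being asked for explicitly.
        return itemCount == other.itemCount && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, itemCount);
    }
}
```

Every line of this is trivial, but typing it by hand is exactly the drudge work being described, and it's easy to verify at a glance.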

2. Iterative refactoring. When generating utility or modular code, you can do dramatic refactors very quickly by asking the model to make, at a conceptual level, the changes you would make yourself. The only limit is the model's context window. I have found GPT-4 very capable in Java and Python.
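To illustrate the kind of conceptual-level request this enables, here is a hypothetical before/after for a prompt like "replace the manual loop with a stream pipeline" (the User record and method names are made up for this sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

class RefactorDemo {
    // Hypothetical domain type for the example.
    record User(String name, boolean active) {}

    // Before: a manual loop collecting the names of active users.
    static List<String> activeNamesLoop(List<User> users) {
        List<String> result = new ArrayList<>();
        for (User u : users) {
            if (u.active()) {
                result.add(u.name());
            }
        }
        return result;
    }

    // After: the same logic expressed as a stream pipeline, the kind of
    // mechanical rewrite you can describe once and have applied everywhere.
    static List<String> activeNamesStream(List<User> users) {
        return users.stream()
                .filter(User::active)
                .map(User::name)
                .collect(Collectors.toList());
    }
}
```

The refactor is easy to verify because both versions can be run against the same inputs.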



