
Have built multiple MVP-scale Node web apps, Python scripts, and two native iOS apps with it, without knowing the languages beforehand.

The only skill I brought to the table was app architecture.

It's not just hype.




It's a great and very useful tool, but the hype around it feels significantly blown out of proportion.

I also built multiple things with it, and I always came to a point where it just couldn't handle anything slightly larger than an MVP, or an unguided change that required editing multiple files at once.


In my experience this is just too big of a prompt to give it.

The best way to use it is to make everything as atomic as possible. Ask it for one function at a time, rather than "make my app handle user auth".
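
To make that concrete, here's the kind of single-function request that works well. A minimal sketch in Python; the function names and the choice of bcrypt are my own illustration, not something from this thread:

    # Atomic request: "write a function that hashes a password with bcrypt"
    import bcrypt

    def hash_password(plain: str) -> bytes:
        # gensalt() picks a random salt; salt and cost factor
        # are embedded in the returned hash
        return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt())

    def verify_password(plain: str, hashed: bytes) -> bool:
        return bcrypt.checkpw(plain.encode("utf-8"), hashed)

A request of that size fits in one prompt, and the output is small enough to review before asking for the next piece.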


Of course one can dumb down the requirements to something it can handle, but what about "once basic auth is added, check which endpoints should require it, and for which clients"? Any real work is currently out of scope.


This is why Prompt Engineering is a legitimate profession.

In the future, you could prompt GPT-10 with "give me a marketing plan" and its output would be just as terrible as GPT-3's.

Leveling up one's prompting skill from zero-shot to few-shot to agentic is how you get usable results.
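
Concretely, "few-shot" just means seeding the conversation with worked examples before the real request. A sketch using the OpenAI Python client; the model name and the toy examples are placeholders of mine:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Zero-shot would be only the final user message; few-shot first
    # shows the model the input/output pattern it should imitate.
    messages = [
        {"role": "system",
         "content": "You turn plain-English rules into Python validators."},
        # worked example (the "shot")
        {"role": "user",
         "content": "Rule: usernames are 3-20 lowercase letters"},
        {"role": "assistant",
         "content": "def valid(u): return u.isalpha() and u.islower()"
                    " and 3 <= len(u) <= 20"},
        # the actual request
        {"role": "user",
         "content": "Rule: ports are integers between 1024 and 65535"},
    ]

    reply = client.chat.completions.create(model="gpt-4o",
                                           messages=messages)
    print(reply.choices[0].message.content)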


I would love to see those code bases and the commit histories. We had code scaffolding and code generators well before LLMs. Just as we had autocomplete before LLMs.


This is probably one of the most significant impacts LLMs will have on software development. Programming languages, frameworks, APIs, and runtimes will become less relevant to humans, and will probably be optimized for LLM use. DX is moving up the stack.


Ok, so you're a library developer and create a greenfield API.

What do you do so that ChatGPT can pick up your library and its patterns? The obstacles I see in this scenario:

* Base models take months and millions of dollars to train

* RLHF can supposedly add knowledge, but it's disputed whether it does much more than change style

* What incentive will OpenAI have to include your particular library's documentation?

I imagine that if the library becomes really popular, a lot of other code will include examples of how to use it. But what about before that?

Including new knowledge always lags by a few months (are there two GPT updates per year? Maybe up to four, but not significantly more), so what about a fast-moving, agile greenfield project? It could frustrate LLM users (I know I've already been bitten a lot by Python library changes).
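
One stopgap for that lag, assuming your docs fit in the context window, is to paste the library's current reference into the prompt on every request instead of relying on training-time knowledge. A sketch; the file path, model name, and the connect()/query() functions are placeholders of mine:

    from openai import OpenAI

    client = OpenAI()

    # Send today's API reference with the request, so the model isn't
    # limited to whatever (possibly stale) version it saw in training.
    lib_docs = open("docs/api_reference.md").read()  # placeholder path

    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY this API reference:\n\n" + lib_docs},
            {"role": "user",
             "content": "Show idiomatic use of connect() and query()."},
        ],
    )
    print(reply.choices[0].message.content)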

It seems it's just another tool in the box for humans to use. Maybe in the far future, when we somehow get around those millions of dollars for fine-tuning (doubtful) and/or libraries simply stop changing.

But still, put any non-trivial code base into a 120k-token context and see how easily both GPT and Claude Opus trip over themselves. It's amazing when it works, but currently it's still a roll of the dice.


Did you use ChatGPT, Copilot, or any open-source model?


Vanilla ChatGPT, not even Cursor



