
What are you basing this on?

I've played with code generation from AIs. It works sometimes, but it confidently produces bugs and doesn't scale at all.

I think another giant leap is required to get to the point where humans are just tweaking. I'm not saying we won't get there, but what in the pipeline in the next 24 months is going to get us there?



We all confidently write bugs and discover them in testing. The same process has been implemented for GPT: GPT Engineer, for example, and you can also instruct the Code Interpreter model to do it in GPT Plus, and it works. Some people here don't seem to be up to date.

What I base my guess on: the fact that we already have GPT apps that write apps, and clearly it works, and as we like to say, "this is the worst it'll ever be." When people say "it's not very good right now, it produces garbage," I only see people who aren't used to the current speed of progress. A year ago, Midjourney produced weird abstract doodles that only looked like images from a distance. Now it produces photorealistic art that's taking jobs. One year.

What we need: bigger context windows, expert models working in constellation, and scale. You need nothing else. Of course some architectural modifications will emerge along the way, but that's a comparatively trivial constraint to solve; it's just normal software engineering as we've done for decades, this time applied to models.
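To make the "expert models in constellation" idea concrete, here's a minimal sketch of what such a setup could look like: a lightweight router dispatches each request to a specialist model. Everything here is hypothetical; the function names, the `EXPERTS` table, and the stub "models" are illustrative placeholders, not a real API, and a real system would use a learned classifier rather than keyword matching.

```python
# Hypothetical sketch: routing requests across a constellation of
# specialist models. The "experts" are stubs standing in for real models.

def code_expert(prompt: str) -> str:
    # Placeholder for a model fine-tuned on code.
    return f"[code-expert] draft patch for: {prompt}"

def docs_expert(prompt: str) -> str:
    # Placeholder for a model fine-tuned on documentation.
    return f"[docs-expert] docstring for: {prompt}"

def general_expert(prompt: str) -> str:
    # Fallback generalist model.
    return f"[general] answer to: {prompt}"

EXPERTS = {
    "code": code_expert,
    "docs": docs_expert,
}

def route(prompt: str) -> str:
    # Naive keyword routing; a production system would use a
    # classifier (or the models' own confidence) to pick an expert.
    lowered = prompt.lower()
    for keyword, expert in EXPERTS.items():
        if keyword in lowered:
            return expert(prompt)
    return general_expert(prompt)

print(route("fix this code bug"))      # goes to code_expert
print(route("what's the weather?"))    # falls back to general_expert
```

The point of the sketch is only that the constellation part is ordinary software engineering (dispatch, composition, fallback), which is why it reads as the easier half of the problem compared to training the experts themselves.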


Just because you happen to do a trivial and menial job doesn't mean that every software developer does.



