
Is programming even the hard part about programming? In all seriousness, what I would really need from an AI to start saving me time would be for it to interview all the customers/partners involved on the project, determine the scope of functionality needed, boil all that down to a set of sensible domain models that make sense to everyone, identify where/when messages need to be passed, determine which things can happen piecemeal as opposed to requiring a bulk op, decide on a persistence technology based on the nature of the requirements...

And that's just an abbreviated form of what I go through when designing a back-end for a tiny boutique application. 99% of programming is making decisions. When you finally have everything planned, the code can almost write itself, but getting to that point requires so much background knowledge that I'm not sure GPT4 will be hunting for my job anytime soon. Or even really augmenting it. I'd be happier if they could just get auto-complete in VS Code to not suck complete balls.




> the code can almost write itself

My 2c: I've been an eng for around 15 yrs. I semi-recently had a brain injury so haven't been able to dedicate anywhere near as much mental cognition to programming recently. That's why I've been unable to maintain full-time work.

I started using chatgpt around 3 months ago. Initially skeptical, I started giving it fun and weird logical/semantic puzzles to satisfyingly "prove" my intuition that it was insufficient to solve any true problems (and that we humans are still needed!). However, I soon became humbled by its capabilities. There are many many things it cannot do, but I've been amazed at the things it can do if given nuanced and detailed enough prompts. I've realised that, if I prompt it well enough, and use my existing knowledge from those accrued 15yrs, I can get awesome results. I'm working on a small project right now and it's written roughly 70% of the code. I've had to make various corrections along the way, but I've found that I can focus on content and larger 'macro' domain logic rather than annoying (tho intriguing) procedural coding.

It's been so incredibly empowering and freeing to be able to dedicate more brain to _what_ I want to build instead of _how_ I want to build it. My normal build process is now something like this:

- state problem and what you desire, with good specificity

- [optional] give it your current working code, specifying frameworks, rough file structure

- confirm it understands and clarify/correct it if necessary

- [important] ask it to ask _you_ questions for clarification

- ask it for an overview of how it would solve the problem

- (for big tasks, expect a macro rundown)

- (for small tasks, expect or ask for the actual code)

- [important] make specific requests about lang/module/algorithms

- [important] ask it to write test suites to prove its code works and paste in any errors or assertion failures you encounter. It'll surprise you with its advice.
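
To give a flavour of that last step, the test suites it hands back tend to look roughly like this (purely an illustrative sketch, not output from a real session - `slugify` and the Jest-style assertions are made-up stand-ins for whatever you asked it to write):

    // Hypothetical example of a generated test suite (Jest syntax, TypeScript).
    // slugify() stands in for the function the model was asked to produce.
    import { slugify } from "./slugify";

    describe("slugify", () => {
      it("lowercases and hyphenates plain text", () => {
        expect(slugify("Hello World")).toBe("hello-world");
      });

      it("drops characters that aren't alphanumeric", () => {
        expect(slugify("Rock & Roll!")).toBe("rock-roll");
      });

      it("handles empty input", () => {
        expect(slugify("")).toBe("");
      });
    });

If an assertion fails, I paste the failure output straight back in and it usually has sensible advice about whether the code or the test is the thing that's wrong.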

It doesn't replace my need to code, but OMG it makes it so much less burdensome and I'm learning a tonne as well : )


That’s super interesting. I’m recovering from burnout and other health issues and I’ve found it to be occasionally helpful in the way you are describing. For me, it can smooth out the process and “lower the intensity” of accomplishing any particular task, especially if it is something where I don’t know how to do it off the top of my head (what libs / functions to use, how to call them, etc). I can then pretty easily correct any mistakes, and I didn’t have to spend 20 mins googling, reading docs, and so on in order to solve the problem.

If you don’t mind, do you have any good examples of how you prompt it? Your process looks pretty nice / robust, it would be cool to see it in action.

Also, have you used gpt-4 much or can you get away with using 3.5 sometimes?


> what libs / functions to use, how to call them

Yeh same. It's got a pretty good overview of what libraries are available. I tend to ask it for an npm module to do x and it always has a couple of options, can list pros/cons, and can give/modify code to use them.

> have you used gpt-4 much or can you get away with using 3.5 sometimes?

Ah so I _always_ use gpt4. It's in a whole other ballpark IMHO.

> If you don’t mind, do you have any good examples of how you prompt it?

E.g. I would say something like "show me precisely how to set up, code and deploy a nextjs app that lets a user '...'". It tends to be really good at doing simple standalone stuff like todos/colorpickers/blah apps, but you'd be surprised how far it can get with a more advanced problem domain. E.g. I just entered this and it really impressed me with its output: "Can u show me how to set-up, code and deploy a nextjs app that lets users input a set of sentences into a textarea and receive back clustered sets of sentences (based on semantic similarity) with different colors of the hsl spectrum indicating that similarity." - try it! It gives complete react components, endpoints using tensorflow, and shows how to vary hsl based on weights. I reckon I'd have to make around 20 minutes of changes to get it working and deployed.
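
To give an idea of the shape of the colour part, the weight-to-HSL mapping it produces looks roughly like this (my own minimal sketch rather than ChatGPT's verbatim output; the cosine-similarity helper and the 0-1 weight range are assumptions):

    // Sketch: turn a semantic-similarity weight in [0, 1] into an HSL colour.
    // In the real app the weights would come from comparing sentence embeddings
    // (e.g. cosine similarity between Universal Sentence Encoder vectors).
    function cosineSimilarity(a: number[], b: number[]): number {
      const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
      const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0));
      const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0));
      return dot / (normA * normB);
    }

    function similarityToHsl(weight: number): string {
      // 0 -> blue (240deg), 1 -> red (0deg); saturation and lightness fixed.
      const hue = Math.round((1 - weight) * 240);
      return `hsl(${hue}, 70%, 50%)`;
    }

The remaining ~20 minutes is mostly wiring something like that into the React components and the endpoint it generates.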


> try it! It gives complete react components,

Maybe I expect more or work to different standards, but every time I've tried it, it gives low-quality code with enough issues that I'd rather write it myself. Sometimes it'd give completely wrong answers.

It's just not code I'd commit or let pass a code review.


It's funny to me that you have such an incredibly thoughtful series of prompts and responses to yield optimal results as if you are truly leveraging AI and I am just yelling at it like it's a junior dev on a Slack that I don't like. "Chat! Take this code and make it do X instead! I wanna do Y, do it. Nah, not like that, do it with this thing here."


Yeah, it (both 3.5 and 4) is good at writing code overall. I’ve had it write me entire programs (of smaller size) and mostly I just have to correct for library usage (because it’s trained on outdated docs) or small things like explicit casting in a language like Python where that’s needed.


...and makes your code public domain (you just admitted here that you're using it) - if anyone accesses an app developed by you, they can use it freely without any license.

Worst thing - you're feeding code that's potentially not yours into GPT. That'd be a fireable offense to me (and a very expensive lawsuit for at least a couple of places I know). Not an issue if you're a lone wolf, though.

It's a dystopian thought, but I wouldn't be surprised if Microsoft (which provides such services), when it knows you're using the service to create public domain code, could just copy it one-to-one, because hey - free lunch, right?


Can you explain your reasoning behind this statement? Why does it make you code “public domain”?

And why can anyone use an app developed by you “freely without any license”?

Not saying you are wrong, but sweeping claims need some kind of evidence.


See sibling poster for general information. There's also the matter of "injecting" copyleft code from the generator into your own codebase [1], e.g. GPL3 or AGPL.

A few companies are blocking their employees from using it [2], quoting various reasons, and I know some that aren't big enough to matter in the news, but have done it too.

[1]: https://codeium.com/blog/copilot-trains-on-gpl-codeium-does-...

[2]: https://www.hr-brew.com/stories/2023/05/11/these-companies-h...


It's most likely in reference to the recent US copyright office guidance on non-copyrightability of generative AI output: https://www.copyright.gov/ai/ai_policy_guidance.pdf There's already been some discussion here on HN: https://news.ycombinator.com/item?id=35191206

In any case, no one is going to deploy purely AI-generated code in the near future, which would be non-copyrightable. In practice any generated code will be edited by the human developer, and it doesn't take that much creative human input to make the result copyrightable.


That's a very optimistic take, though.

Derivative works aren't as easily re-copyrightable. And we're considering a context where the programmer copy-pasted code from GPT, so most chunks probably won't be rewritten.

There's also the other part - if there's proof that part of the code (even just 10%) was made using a non-copyrightable source, it would be very hard to prove that the remaining 90% is copyrightable.

Until the laws address those issues, using any code from a generator is a huge liability.


>...and makes your code public domain

Just what is your definition of "public domain"?


Not anything that matches legal reality. That's for sure.


> if anyone accesses app developed by you, they can use it freely without any license.

This is simply incorrect on every level, starting with the fact that (in the US, anyway), you can't place your works in the public domain even if you wanted to.


That's not true in at least one case: if you work for the US federal government, all of your works are automatically in the public domain. Of course, they may also be barred from disclosure for other reasons.

https://en.m.wikipedia.org/wiki/Copyright_status_of_works_by...


You're correct, that's the one exception. Although you could argue that it's not really an exception -- it's that when you're producing IP in the course of your employment, your employer owns the copyright. And if you work for the federal government, your employer is the American people, so in a real sense we collectively hold the copyright. Which is the same as being in the public domain.


Even if it did - the current copyright regime is already so extortionate as to be dystopian, so is it truly a loss?


I don't need it to write code. I don't need it to interview customers. I need it to attend the endless, pointless, weekly Zoom meetings where a manager with zero understanding of the task being done, why it's being done, and with no idea how to do it, is nevertheless happy to review open tickets and discuss them.

This task will take 2 months. I'm busy working on it. If you want a 5-minute email every week on how it's going, then fine. If you want me to throw my toys when I hit a road-block, then I'm fine with that.

But no, we need weekly status update meetings with all the other developers, testers, product owners, all wasting their and my time, just because a manager is "managing".

Forget code, that's not the hard part. When the AI can just be my doppelganger in the meeting on my behalf THEN I'll worry about AI taking my job.


> I need it to attend the endless, pointless, weekly Zoom meetings where a manager with zero understanding of the task being done...

You don't need AI for that, you need to brush up your resume and find another job. The biggest regret of my career is not having left places early when the organisation/management style sucked the enjoyment/productivity out of what I did, particularly when everyone else there agreed with me.


My point is that (as yet) AI can't replace my job, so I'm safe. (The job is safe whether I do it or someone else does.)

Now since I work remotely, I am much more likely to be replaced by a cheaper offshore worker. Certainly seems to already have happened to some of the managers I report(ed) to.


“as yet” is doing a lot of work in that first sentence. We all have a GPT number. Some small number of workers have already been replaced by GPT-4, some it will not be until GPT-7, some may out-code the robots till GPT-9.5… Having a higher number doesn’t mean you are a better developer, just that you sit in more meetings and have to use “soft skills” like kissing ass and playing stupid, covering your ass, and other human games that will require more advanced GPTs.


Are you suggesting LLMs will inevitably gain sentience, consciousness, and the ability to reason deductively at some point in the future?

Recall that the problem with programming isn’t generating more code. Completing a fragment of code by analyzing millions of similar examples is a matter of the practical application of statistics and linear algebra. And a crap ton of hardware that depends on a brittle supply chain, hundreds of humans exploited by relaxed labour laws, and access to a large enough source of constant energy.

All of that and LLMs still cannot write an elegant proof or know that what they’re building could be more easily written as a shell script with their time better spent on more important tasks.

In my view it’s not an algorithm that’s coming for my job. It’s capitalists who want more profits without having to pay me to do the work when they could exploit a machine learning model instead. It will take their poor, ill-defined specifications without complaint and generate something that is mostly good enough, and it won’t ask for a raise or respect? Sold!


> In my view it’s not an algorithm that’s coming for my job. It’s capitalists who want more profits without having to pay me to do the work when they could exploit a machine learning model instead.

Bingo. This is the real threat, and not just in our industry, but in every industry.


ChatGPT can actually be very useful in brushing up your resume, btw.


In all seriousness, I haven't seen the idea of LLMs replacing managerial functions in a while; it would be an interesting inversion of a quasi-post-labor utopia.


AI would likely make great managers. Given competent staff.

Alas while I rag on about incompetent managers, it's not uncommon to find staff who, charitably, need a lot of "management".


The best part of AI management would be that for the first time, the manager would understand what their direct reports are saying.

If I say something like “There may be a compatibility risk due to DNS apex records”, I’ll have to spend hours explaining this to a disinterested non-technical manager. The AI understands the concept and doesn’t need me to explain.


I'm honestly not worried at all about LLMs because most of my jobs have consisted of fixing problems in other people's code, and I haven't seen any evidence that an LLM will be capable of doing that on a non-trivial program any time in the near future. I have, however, seen evidence that chatgpt will create many more problems and make my job harder.


> I haven't seen any evidence that an LLM will be capable of doing that on a non-trivial program any time in the near future

Or ever, given that the level of abstraction LLMs work at is completely wrong. They can approximate the syntax of things in their training corpus, but logic? The lights are off and nobody's home.


I've already had the GPT3.5-Turbo model walk through and step-by-step isolate and diagnose errors. They 100% can troubleshoot and correct issues in the code.

Literally you give it the code and the error and it can walk you through finding the solution.

When I say walk you through, I generally mean cases where you provide it a function but the error is caused by some input that doesn't conform to expectations. If the error were just a defect in the code, it can generally point that out instantly.
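
A made-up example of the kind of thing I mean - the function itself is fine, the input isn't, and pasting both the code and the error tends to get you a diagnosis of the input rather than a rewrite of the function (the `Order` shape and error are hypothetical, just to illustrate):

    // Hypothetical example: a perfectly reasonable function...
    interface Order {
      id: string;
      items: { price: number }[];
    }

    function orderTotal(order: Order): number {
      return order.items.reduce((sum, item) => sum + item.price, 0);
    }

    // ...fed data from an upstream API that sometimes omits `items`:
    //   TypeError: Cannot read properties of undefined (reading 'reduce')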

Obviously GPT4 is even better.


Most bugs I've worked on relate to some weirdness that requires tracking down a specific nonobvious offending function. How would GPT help with that at all? Maybe if you know a particular function is wrong, and ask it to find a bug, but by then most of the work has already been done.


> If the error were just a defect in the code, it can generally point that out instantly.

That's not fixing bugs, that's static analysis. Finding the solution to the specific problem that needs to be solved is a lot more difficult than identifying any problem and then solving it.


The bad news is that everyone (including the CEO interviewed) who lords over you doesn't know that and won't hesitate to dump all that work on you.


Figuring out the logic in code doesn't seem that different from figuring out the logic in other human produced text. At least it doesn't seem harder, if anything it's probably easier for a machine.

Yes, at the moment GPT4 and the like aren't all that good yet, but they have shown that they have started understanding semantics.


It all started this year, just a few months ago, remember? It will get only better from here, don't worry. Or do, not sure. Anyway, you can't avoid it.


> Anyway, you can't avoid it.

That seems certain. It's why I'm putting serious thought into leaving the industry.


Carpentry and nursing look safe for now. Most other jobs are not. Some are very competitive already, like painting, writing, all sorts of design. Without AI it will be hard to find a job there. Driving and piloting will be mostly automated soon.

Rather than leaving, it's better to adapt. AI is just one technology on the list. They come and go; that's the nature of IT. Except AI will stay - it will change with time, but it will never go away. Besides, it's the coolest thing right now. And it will create new jobs around itself.


I remember >10 years ago thinking that learning another language was probably pointless because Google Translate was 90% there already and would soon make that knowledge obsolete. Or all the people saying trucking was on the verge of being automated.


> Google Translate was 90% there

90% there? I disagree. 90% would mean that most of the time it gets the translation right. That hasn't been my experience.


I wouldn't be so sure. Combine current capability levels with the larger context windows and they can probably already point out most of the problems with code.

I recently fed a very large file into GPT4 and it flagged a few serious bugs that I hadn't noticed after a few self-reviews.


That's wishful thinking if you ask me


Some code writes itself, and I hope generative AI will help with it, but a lot of code doesn’t. Which is likely why generative AI is so terrible at writing good code.

Of course that doesn’t mean that generative AI won’t see widespread adoption by non-programmers in digitalisation in the coming decade. We already have a lot of “process people” making things with GPT, and those things work. Or at least, they sort of work, but they are also built so terribly that they won’t scale and won’t be maintainable. Which is fine for a while, and it’s probably even fine for the lifetime of some programs. Because let’s be honest, often the quality of the programs that are implemented in non-tech enterprise isn’t important. In fact, Excel “programmers” can frankly do wonders in terms of creating business value with short-lived automation that won’t need to scale or be maintained in the long run, because it’s simply going to be replaced by the time it stops being useful, by which point you’ll have grown to a size where you’re buying SAP or similar (regardless of whether that’s a good idea or not).

I do think that a lot of us are going to spend a lot of time “cleaning up” after non-programmers doing GPT programming. Which will be lucrative and boring.

But writing good code for complicated problems? I’m not sure when/if generative AI will be able to handle that. I had hopes until GPT. We use it quite a lot, mind you; it writes a lot of our documentation. We have high hopes it’ll eventually get good enough to write a lot of our unit tests as well, and obviously we’re already in a world where a lot of the “trivial” code can be auto-generated - though frankly we were able to do that before generative AI. But actual programming? Heh.


> Is programming even the hard part about programming?

Exactly!

Figuring out programming challenges isn't really ever part of the work I do, which is mostly business process stuff.

Comprehending APIs is often a pain. A few times Copilot has helped by auto-completing the incantation I needed when my brain wasn't working and I couldn't get an understanding from the docs.

So as you say, good autocomplete is all I really need.

That and decent documentation!

There is never a moment where I think I'll break my flow and have a chat conversation to write some code for me. Never.

One thing I did think would be useful: if AI could abstract my already-written and duplicated code into testable, robust, reusable classes/methods for me!
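
Something like this, say (a made-up before/after; the retry helper and the fetchUser/fetchInvoice call sites are just placeholders to illustrate the kind of extraction I mean):

    // Before: the same fetch-with-retry logic copy-pasted at several call sites.
    // After: the duplication pulled out into one small, testable helper.
    async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
        }
      }
      throw lastError;
    }

    // Usage at the previously duplicated call sites (placeholders):
    //   const user = await withRetry(() => fetchUser(id));
    //   const invoice = await withRetry(() => fetchInvoice(id));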


> Comprehending APIs ... good autocomplete ... decent documentation

The ability to ask questions about the contents of the documentation, as opposed to inefficiently RTFM-ing, is one of the areas where I see LLMs as potentially especially useful. (They should also be able to point to the source like actual "search", though.)


> So as you say, good autocomplete is all I really need.

Maybe this is why I haven't found the great value in ChatGPT. I don't find value in autocomplete, good or otherwise.


Whether this approach works depends a lot on what you are trying to write.

GPT4 is not very good at understanding new algorithms and data structures for example. (I recently tried very hard, but it failed miserably. I can talk about the details, if someone is interested.) But it might be good enough at helping you organise a sprawling project.


Yes, I'd like the details on this. My experience has been the opposite of yours: if you prompt it correctly, it has the algorithm or data structure trained into its model already.


I am currently uplifting a 30-year-old codebase to a somewhat newer one to try and give it another 20-30 years. I spend a lot of time talking with humans to rediscover the context of the system.

AI is not going to eat my lunch. But it sure is handy being able to make sense of some old logic and syntax. It literally saves me hours of having to figure out strange code snippets and such.

GPT is very impressive. But we'll be fine. There's plenty more complexity to come, so be smart, be a part of that complexity.


How does this work & which GPT are you using? So you upload a bunch of code somehow and ask questions about it?


I just read it carefully and put the clangers into ChatGPT for some suggestions on different approaches. There's a mix of Perl, old PHP, ASPX and MSSQL T-SQL all interacting. So it's less about getting it to do all the work, and more about keeping me from getting bogged down in the trivial stuff.


You understand that your generated code is based on training data from September 2021 at best, right? Maybe it's okay for some niches, but I see lots of evolution in almost all segments of software engineering, especially frontend and ML.


Agreed, my experience programming has been similar. Simply put the hardest part of most big software projects is just building the right thing.


Interesting take, but I think you're drastically underestimating how much work the programming part is. I think currently at least 90%+ of the work is actually programming / implementing the thing, and that's what AI is going to replace.


I'm not sure about the 90% figure you quote. I'd say it's a lot less from my experience. But even in that "programming part" I'd say the time to implement core functionality follows the Pareto principle. You can probably code up 80% of what you need in 20% of the time. The other 80% of the time ends up being QA and bug-fix iteration.


Programming is easy. Asking the right question is hard.



