I like this comment; it is exceptionally insightful.

An interesting question is "How is programming like trading securities?"

I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

What trading software has a hard time doing is coming up with new securities. What LLMs absolutely cannot do (yet?) is come up with novel mechanisms. To illustrate that, consider the idea that an LLM has been trained on every kind of car there is. If you ask it to design a plane it will fail. Train it on all cars and planes and ask it to design a boat, same problem. Train it on cars, planes, and boats and ask it to design a rocket, same problem.

The sad truth is that a lot of programming is 'done', which is to say we have created lots of compilers, lots of editors, lots of tools, lots of word processors, lots of operating systems. Training an LLM on those things can put all of the mechanisms used in all of them into the model, and spitting out a variant is entirely within the capabilities of the LLM.

Thus the role of humans will continue to be to do the things that have not been done yet. No LLM can design a quantum computer, nor can it design a compiler that runs on a quantum computer. Those things haven't been "done" and they are not in the model. The other role of humans will continue to be 'taste.'

Taste, defined here as an aesthetic: something that you know when you see it. It is why, for many, AI "art" stands out as having been created by AI; it has a synthetic aesthetic. And as one gets older it often becomes apparent that the tools are not what determines the quality of the output; the operator is.

I watched Dan Silva do some amazing doodles with Deluxe Paint on the Amiga and I thought, "That's what I want to do!" and ran out and bought a copy and started doodling. My doodles looked like crap :-). Understanding that I would have to learn the tool, find its strengths and weaknesses, and then express myself through it was clearly going to be a lot more time-consuming than "get the tool and go."

LLMs let people generate marginal code quickly. For so many jobs that is good enough. Generating really good code under constraints the LLM can't model will remain the domain of humans until GAI is achieved[2]. So careers in things like real-time and embedded systems will probably still have a lot of humans involved, and systems where extracting every single compute cycle out of the engine is a priority will likely be dominated by humans too.

[1] Very early on there were papers on 'genetic' programming. It's worth reading them because they arrive at a singularly important point: "How do you define 'which is better'?" Given a solid, quantifiable, and testable metric for 'goodness', genetic algorithms outperform nearly everything. When the ability to specify 'goodness' is not there, genetic algorithms cannot outperform humans. What is more, they cannot escape 'quality moats', where the solutions on the far side of the moat are better than the solutions being explored but the algorithm cannot get far enough into the 'bad' solutions to start climbing up the hill on the other side to the 'better' solutions.
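
To make [1] concrete, here is a minimal toy sketch (nothing from the actual papers, just an illustration with a made-up one-dimensional fitness landscape): everything hinges on the fitness function, and with small mutation steps the search settles on the nearer, worse peak because it cannot cross the zero-fitness 'moat' between the two hills.

    import random

    # Toy fitness landscape: a modest peak near x = 2 and a better peak
    # near x = 8, separated by a flat, zero-fitness valley.
    def fitness(x: float) -> float:
        return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 5 - (x - 8) ** 2)

    def evolve(generations: int = 200, pop_size: int = 30, sigma: float = 0.3) -> float:
        # Start the whole population near the lesser peak.
        population = [random.gauss(2.0, 0.5) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half, judged only by the metric.
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            # Mutation: small perturbations almost never jump the valley,
            # so the search stays on the near hill.
            population = [p + random.gauss(0.0, sigma) for p in parents for _ in (0, 1)]
        return max(population, key=fitness)

    random.seed(0)
    best = evolve()
    print(f"best x = {best:.2f}, fitness = {fitness(best):.2f}")
    # Typically ends near x = 2 (fitness ~3) even though x = 8 scores 5:
    # the 'quality moat' in action.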

[2] GAI being "Generalized Artificial Intelligence", which will have to have some way of modelling and integrating conceptual systems. Lots of things get better then (like self-driving finally working), maybe even novel things. Until we get that, though, LLMs won't play here.




> LLMs let people generate marginal code quickly.

What's weird to me is why people think this is some kind of great benefit. Not once have I ever worked on a project where the big problem was that everyone was already maxed out coding 8 hours a day.

Figuring out what to actually code and how to do it the right way seemed to always be the real time sink.


>I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

Right, but when Python came into popularity it's not like we reduced the number of engineers 10-fold, even though it used to take a team 10x as long to write similar functionality in C++.


Software demand skyrocketed because of the WWW, which came out in 1991 just before Python (although Perl, slightly more mature, saw more use in the early days).



