The difference from a purely still photograph is that code is a functional encoding of an intention. An LLM’s code could be flawless and still not encode the intended behavior of the product. I’ve seen that on many occasions.
Many people don’t understand what code really is about; they think they now have a printer toy and we no longer have to use pencils.
That’s not at all the same thing.
Code is intention, logic, and specific use case all at once. With a non-deterministic system and vague prompting, there will be misinterpreted intentions from the LLM, because the model makes decisions in order to move forward. The problem is the scale of it: we’re not talking about 1,000 loc. In a month you can generate millions of loc; in a year, hundreds of millions.
Some will have to crash and burn their company before they realize that having no human at all in the loop is nonsense. Let them touch the fire and make up their minds, I guess.
> Code is intention, logic, and specific use case all at once. With a non-deterministic system and vague prompting, there will be misinterpreted intentions from the LLM, because the model makes decisions in order to move forward. The problem is the scale of it: we’re not talking about 1,000 loc. In a month you can generate millions of loc; in a year, hundreds of millions.
People are also non-deterministic. When I delegate work to a team of five or six mid-level developers, or God forbid outsourced developers, I’m going to have to check and review their work too.
It’s been over a decade since my vision/responsibility could be carried out by just my own two hands and be done on time within 40 hours a week - until LLMs.
People are indeed not deterministic. But they are accountable. In the legal sense, of course, but more importantly, in an interpersonal sense.
Perhaps outsourcing is a good analogy. But in that case I'd call it outsourcing without accountability. LLMs feel more like an infinite chain of outsourcing.
As a former tech lead and now a staff consultant who leads cloud implementations plus app dev, I am ultimately responsible for making sure that projects are done on time and on budget and meet requirements. Neither my manager nor the customer would allow me to say it’s one of my team members’ fault that something wasn’t done correctly, any more than I could say “don’t blame me, blame Codex.”
I’ve said repeatedly over the past couple of days that if a web component was done by someone else, it might as well have been created by Claude; I haven’t done web development in a decade. If something isn’t right or I need modifications, I’m going to have to either Slack the web developer or type a message to Claude.
Ofc people are non-deterministic. But usually we expect machines to be deterministic. That’s why we trust them blindly and don’t check their calculations. We review people’s work all the time, though.
Here, people will stop reviewing LLM code, treating the machine as a source of truth the way it is in other areas. That’s my point: reviewing code takes time, and even more time when no human wrote it. It’s a dangerous path to stop reviews out of trust in the machine, now that the machine is, like humans, non-deterministic.
No one who has any knowledge or who has ever used an LLM expects determinism.
And there are no computer professionals who haven’t heard about hallucinations.
Reviewing whether the code meets requirements through manual and automated tests - and that’s all I cared about when I had a team of 8 under me - is the same regardless. I wasn’t checking whether John used a for loop or a while loop in between my customer meetings and meetings with the CTO. I definitely wasn’t checking the SOQL (not a typo) of the Salesforce consultants we hired. I was testing inputs, outputs, and UX.
Having a team of 8 people producing code is manageable. An AI with 8 agents writing code all day long is not the same volume: it can generate more code in a day than one person can review in a week.
What you’re saying is that product teams will prompt what they want to a framework; the framework will take care of spec analysis, development, reviews, and compliance with the spec; and product teams with QA will make sure the delivery is functionally correct.
No humans need to make sure of anything code-related.
What we don’t know yet is whether AI will still produce solid code through the years, given that it’s all statistical analysis. With volumes of millions of loc, the refactorings needed, data migrations, etc., what will happen?
For context, I only started using coding agents - Codex CLI and Claude Code - in October. Once I saw that you were billed by usage, I decided I wasn’t using my own money for it when it’s for a company.
Two things changed: Codex CLI now lets you use it with your $20-a-month subscription (I have never run into quota issues with it), and my employer signed up for the enterprise version of Claude, where we each have an $800-a-month allowance.
My argument though is “why should I care about the code?” for the most part. If I were outsourcing a project or delegating it to a team lead, I would be asking high level architectural, security and scalability questions.
AI generated the code, AI maintains the code. I am concerned about abstractions and architecture.
You shouldn’t have to maintain or refactor “millions of lines of code”. If your code is well modularized with clean interfaces, making a change for $x7 may mean making a change in $x1…$x6, but you should still be working locally in one module at a time. You should do the same for the benefit of human coders. Heck, my little 5-week project has three independently deployable repos in a root folder. My root AGENTS file just has a summary of how all three relate via a clean interface, something like the sketch below.
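As a toy sketch of what such a boundary can look like (all names here are hypothetical, not taken from that project): callers depend only on a small interface, so one module’s internals can be refactored without touching the modules that use it.

```python
# Hypothetical sketch (names are illustrative, not from the project above):
# callers depend on a small interface, so one module's internals can be
# refactored without rippling into the modules that use it.
from typing import Protocol

import boto3

class DocumentStore(Protocol):
    """The clean interface a root-level summary file would describe."""
    def save(self, doc_id: str, body: bytes) -> None: ...
    def load(self, doc_id: str) -> bytes: ...

class S3DocumentStore:
    """One deployable module's implementation, swappable behind the interface."""
    def __init__(self, bucket: str) -> None:
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def save(self, doc_id: str, body: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=doc_id, Body=body)

    def load(self, doc_id: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=doc_id)["Body"].read()

def ingest(store: DocumentStore, doc_id: str, body: bytes) -> None:
    # This caller never knows (or cares) which backend it is talking to.
    store.save(doc_id, body)
```

Whether it’s three repos or three packages, the effect is the same: a human or an agent can work inside one module with only the interface in context.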
In the project I am working on now, besides “does it meet the requirements”, I care about security, scalability, concurrency, user experience for the end user, user experience for the operations folks when they need to make config changes, and user experience for any developers who have to make changes long after I’m off this project. I haven’t looked at a single line of code - besides the CloudFormation templates. But I can answer any architectural question about any of it. The architecture and abstractions were designed by me and dictated to the agents.
On this particular project, at the coding level, there is absolutely nothing that application code like this can do that could be insecure, except hypothetically embedding AWS credentials in the code. But it can’t do that either, since it doesn’t have access to them [1].
In this case the security posture comes from the architecture - S3 Block Public Access, well-scoped IAM roles - not from running “in a VPC”. These are things I am checking in the infrastructure as code, and I was very specific about them.
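Checks like that can even be scripted. A minimal sketch, assuming boto3 and suitable account credentials (the loop and output format are illustrative, not from the project described above):

```python
# Hypothetical audit sketch: verify that every bucket in the account has all
# four S3 Block Public Access settings enabled.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"{name}: public access not fully blocked: {config}")
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no Block Public Access configuration at the bucket level")
        else:
            raise
```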
The user experience has to come from design and checking manually.
I mentioned earlier that my first stab at it scaled poorly. This was caused by my design, and I suspected it would be beforehand. But building the first version was so fast because of AI tools that I felt no pain in going with my more architecturally complicated plan B and throwing the first version away. I wouldn’t have known that by looking at the code. The code was fine; it was the underlying AWS service. I could only know that by throwing 100K documents at it instead of 1,000.
I designed a concurrent locking mechanism that had a subtle flaw. Throwing the code into ChatGPT in thinking mode, it immediately found the flaw. I might have been better off just telling the coding agents “design a locking mechanism for $x” instead of detailing it.
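For illustration only: one common shape for such a lock on AWS is a DynamoDB conditional write. This is a hypothetical sketch - the table name, key schema, and lease length are all assumptions, and it is not the mechanism described above:

```python
# Hypothetical sketch of a distributed lock via a DynamoDB conditional write.
# Table name, key schema, and lease timeout are assumptions for illustration.
import time
import uuid

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
LOCK_TABLE = "locks"  # assumed table with string partition key "lock_id"

def acquire_lock(lock_id: str, ttl_seconds: int = 60) -> str | None:
    """Try to take the lock; return an owner token on success, None if held."""
    owner = str(uuid.uuid4())
    now = int(time.time())
    try:
        dynamodb.put_item(
            TableName=LOCK_TABLE,
            Item={
                "lock_id": {"S": lock_id},
                "owner": {"S": owner},
                "expires_at": {"N": str(now + ttl_seconds)},
            },
            # Succeed only if nobody holds the lock or the holder's lease expired.
            ConditionExpression="attribute_not_exists(lock_id) OR expires_at < :now",
            ExpressionAttributeValues={":now": {"N": str(now)}},
        )
        return owner
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return None  # someone else holds the lock
        raise

def release_lock(lock_id: str, owner: str) -> None:
    """Release only if we still own it, so we never delete someone else's lock."""
    try:
        dynamodb.delete_item(
            TableName=LOCK_TABLE,
            Key={"lock_id": {"S": lock_id}},
            ConditionExpression="#o = :owner",
            ExpressionAttributeNames={"#o": "owner"},
            ExpressionAttributeValues={":owner": {"S": owner}},
        )
    except ClientError as e:
        if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise
```

The ConditionExpression is what closes the classic check-then-act race; the subtle bugs in designs like this tend to hide in lease expiry and release, which is exactly the kind of thing worth throwing at a reviewer, human or LLM.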
Even maintainability was helped, because I knew that I, or anyone else who touched it, was probably going to be using an LLM. From the get-go I threw the initial contract, the discovery session transcripts, the design diagrams, the review of the design diagrams, and my project plan and breakdown into ChatGPT and told it to render a detailed markdown file of everything - that was the beginning of my AGENTS.md file.
I asked both Codex and Claude to log everything I was doing and my decisions into separate markdown files.
Any new developer could come into my repo, fire up Claude, and it wouldn’t just know what was coded; it would have full context of the project from the initial contract through to the delivery.
[1] Code running on AWS never has to worry about AWS credentials explicitly; the SDKs can find the information by themselves, using the credentials of the IAM role attached to the EC2 instance, Lambda function, Docker container, etc.
Even locally, you should be getting temporary credentials assigned to environment variables that the SDK retrieves automatically.
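A minimal sketch of that default credential chain with boto3 (the bucket name is a placeholder): nothing secret ever appears in the code.

```python
# Minimal sketch: boto3 resolves credentials on its own via the default
# provider chain (env vars, ~/.aws config, then the attached IAM role),
# so no access keys are ever hardcoded.
import boto3

s3 = boto3.client("s3")  # no credentials passed in
response = s3.list_objects_v2(Bucket="example-bucket")  # placeholder bucket name
for obj in response.get("Contents", []):
    print(obj["Key"])
```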
Okay - and the person ultimately leading the team is still responsible for it, whether you are delegating to more junior developers or to AI. You’re still reviewing someone else’s code against your specs.
In that case, why can’t other agents just automate your job completely? They are capable of that. What do you bring to the process by still doing the manual organization?
I still have to tell it what to do, and often how to do it. I manage its external memory and guidelines, and review implementation plans. I’m still heavily involved in software design and test coverage.
AI is not capable yet of automating my job completely – I anticipate this will happen within two years, maybe even this year (I’m an ML researcher).
No, I mean that my job in its current form - as an ML researcher with a PhD and 15 years of experience - will be completely automated within two years.
Is the progress of LLMs moving up abstraction layers inevitable as they gather more data from each layer? First we fed LLMs raw text and code, and now they are gathering our interactions with the LLM regarding generated code. It seems like you could then use those interactions to make an LLM that is good at prompting and fixing another LLM’s generated code. Then it’s on to the next abstraction layer.
What you described makes sense, and it's just one of the things to try. There are lots of other research directions: online learning, more efficient learning, better loss/reward functions, better world models from training on Youtube/VR simulations/robots acting in real world, better imitation learning, curriculum learning, etc. There will undoubtedly be architectural improvements, hardware improvements, longer context windows, insights from neuroscience, etc. There is still so much to research. And there are more AI researchers now than ever. Plus current AI models already make us (AI researchers) so much more productive. But even if absolutely no further progress is made in AI research, and foundational model development stops today, there's so much improvement to be made in the tooling around the models: agentic frameworks, external memory management, better online search, better user interactions, etc. The whole LLM field is barely 5 years old.
So your assumption is that it will ultimately be the users of software themselves who will throw some every day language at an AI and it will reliably generate something that meets those users' intuitive expectations?
Yes, it will be at least as reliable as an average software engineer at an average company (probably more reliable than that), or at least as reliable as a self-driving car where a user says get me to this address, and the car does it better (statistically) than an average human driver.
I think this could work for some tasks but not for others.
We didn't invent formal languages to give commands to computers. We invented them as a tool for thinking and communicating things that are hard to express in natural language.
I doubt that we will stop thinking and I doubt that it will ever be efficient to specify tasks purely in terms of natural language.
One of my first jobs as a software engineer was for a bank (~30 years ago). The bank manager wasn’t a man of many words. He just handed us an Excel sheet as the specification for what he wanted us to implement.
My job right now is to translate natural English statements from my bosses/colleagues into natural English instructions for Claude. Yes, it takes skill and experience to do this effectively. But I don't see any reasons Gemini 4, Opus 5 or GPT-6 won't be able to do this just as well as I do.
I have enough savings for a few years, so I might just move to a lower COL area, and wait it out. Hopefully after the initial chaos period things will improve.
For someone in your position, with your experience, it’s quite depressing that your job is going to be automated. I feel quite anxious when I see young generations in my country who say themselves that they are lazy about learning new things. The next generation will be useless to capitalist societies, in the sense that they won’t be able to bring value through administrative or white-collar work. I hope some areas of the industry will move toward AI slowly.
Drop AI, open a basic editor, and write everything by hand without asking the AI anything. Do searches by yourself. That’s how the world worked for decades, pre-2022.
Debug on your own, without asking the AI anything, as well.
If you really can't drop the AI, ask it things when you are really blocked, but ask it not to provide code (you need to write the code yourself to understand and learn). Still, I suspect you'd be better served by a regular web search and by reading tutorials written by human beings who crafted and optimized the writing for pedagogy.
It will probably feel slow and tedious, but that's actually a good, more efficient use of your time.
At this point in your journey, where your goal is above all to learn, I doubt the AI works in your interest. It's already unclear whether it provides a long-term productivity boost to people who are a bit more experienced but still need to improve their craft.
You don't need to optimize the time it takes to build something. You are the one to "optimize".
AI has changed nothing in terms of learning to program; it's every bit as complicated as it ever was (well, languages are better now compared to the 1960s, but it's still hard).
Becoming an expert takes years, if not decades. If someone only started programming in 2025, they still have a long way to go. I get that seeing others move fast with AI can be discouraging, and the only advice I can give is "ignore them". In fact, ignore everyone attempting to push LLMs on you. If you're learning to program, you're not really ready for AI-assisted coding; wait ten years.
There's no really satisfying answer other than: Keep at it, you're probably doing better than you think, but it will take years.
> AI has changed nothing in terms of learning to program
In terms of what you should be doing when you learn to program, I fully agree.
In terms of the effects AI has on the activity of learning to program, I think it has changed things: it has made it very tempting (and affordable - so far) to just have the AI build, and even adapt, the simple stuff that you'd otherwise be building and adapting by yourself. I suppose it can even give you the false feeling that you understand the stuff it has built for you by reading the generated code. But this means you never go through the critical learning steps (trial and error, hard thinking about the thing, noticing what you are missing, etc.).
We already had the ability to run web searches and copy-paste publicly available stuff, but that came with more friction, and the automated adaptation wasn't there; you had to do it yourself. I think gen AI has made it much easier to be lazy in the learning, and it's a trap.
But from the rest of your comment it seems we mostly agree.
Some codebases are a logical mess and have bad names as well. Sometimes Claude is wrong because the semantics of our legacy codebase don’t make sense. Sometimes it finds problems in the wrong places because of that.
Many non-dev people think the LLM does both the thinking and the typing. That’s where the misconception about LLMs completely replacing developers comes from, I guess.
> That’s where the misconception about LLMs completely replacing developers comes from, I guess.
The misconception also arises because AI companies use the word “thinking” and similar terms; that’s what the general population says, and then engineers pick it up too.
When we say reason/think/etc., all the hype created by AI definitely gets a boost, imo.
Do we say these for lack of a better term, or was it intentional that we say reason/think?
I second that. After more than a decade of all the entertainment quoted above, I started reading books again, and it’s really refreshing. Just choose a good book, and no bad acting or directing gets in the way. I read sci-fi books, and I’ve wondered whether I could ever appreciate a movie version of them, because they are so hard to get right, with all the variables that can make a sci-fi movie bad. I can read anywhere; no battery needed, no screen pulsing in my eyes.
It’s one of the best forms of entertainment, because it engages the brain with creativity in ways TV or movies can’t.
In what way does macOS feel like garbage? I use it every day on a 5+ year old MacBook and it’s an absolute blast. Powered on for weeks without a reboot, three 4K 32-inch screens, hundreds of Chrome tabs and apps open, and it’s all smooth. Ofc I don’t even hear a fan, and the software is amazing for me. It all works, all the time.
I'm someone else, but I also feel like macOS is rubbish.
On Linux, if I get a kernel panic, I can dig into the kernel, add debug logs, understand what's going on, and potentially fix it. If I want to swap to a scrolling window manager like niri, I can.
On macOS, it's a black box, and any radar I file with Apple vanishes into a black hole, never to be seen again. There's hardly any customization, and the default UX is horribly undiscoverable and can't easily be driven with just a keyboard.
As a hacker, the above makes macOS garbage, and I'd assume anyone on hacker news would understand that desire to be able to understand and hack on the software you use.
I also like to hack things, but you understand that what drives Mac sales is not people who’d like to hack the system but regular people, on a massive scale. Linux doesn’t have this problem, because the same kind of economics - with year-round salaries to pay - isn’t involved. So I won’t consider an OS trash whose main target, by far, is not hackers, even though there is still some margin for customization.
Your point still stands that for you the OS is garbage. But you’re probably not the main user they have in mind when they develop the OS.
> Change for change's sake, needless shifting in settings/config menus. Weird "we tried to make this similar to mobile" themes in some places but not others. Overly complex os navigation
That is garbage? Changing the UI of Settings? macOS navigation has been the same for a decade. Who uses Settings daily? I don’t even open Settings once a month. Every single piece of UI evolves and must evolve. The ones that don’t are stuck in the past and belong in a museum. I don’t see what is complex, and thus garbage, here.
Functionality that you use once a year or less is _exactly_ the sort of thing that needs to be consistent. It's literally the last place that an innovation should arrive, after that innovation has been fully developed and battle-tested.
So among the hundreds of things that make an OS garbage, you’ll likely place at the top a UI you use once a year. Windows Settings has been the worst of the worst of all settings UIs ever. By that criterion, then, macOS is luxury in every form compared to the maze of hidden UI that makes up Windows.
Also anecdotal, but most low-tech people I know are using ChatGPT like Google Search and will never pay for it. Maybe that’s why ChatGPT ads will work beautifully on them.
Everyone says “advertise!” like it’s a magic bullet. The tech industry is littered with companies that have high traffic and couldn’t figure out how to monetize via advertising, Yahoo being the canonical example.
Besides, the cost of serving an LLM request - using someone else’s infrastructure and someone else’s search engine - is orders of magnitude higher than a Google search.
Besides, defaults matter. Google is the default search engine on every mobile phone outside of China and in the three browsers with the most market share.
> Everyone says “advertise!” like it’s a magic bullet. The tech industry is littered with companies that have high traffic and couldn’t figure out how to monetize via advertising, Yahoo being the canonical example.
That is obviously because they can’t figure out any other sellable value in the product.
Ads per se aren’t necessarily a problem. It’s good to be able to discover companies and products.
The problem is bad ads: ads that lie, ads for fake products, unfair ads. Too many ads are also a problem.
People just enjoy and value the process of making music.
Just like you could enjoy the process of drawing, or doing sports.
Given the number of talented musicians who do not live off their art, most of the time they value the process and the result; if other people like it too and pay for it, that’s even better. Most music is produced to give emotions not to others but to the musician. It just happens that we sometimes share the musician’s emotions.
So if you remove the process, or devalue it, you touch artists in their heart and values, because most of them have worked on their craft for years.
One person using more software to "make" music does not remove the process or devalue music for another person who wants to use less software to make music. Replace music with anything in this reasoning.
Actually, with AI the music is made for you. I don’t have to learn how to play the piano, the guitar, or anything else. I just prompt what kind of instrument I want. Is that still “making music”? Idk; for me it’s not the same. In the end I’m not a musician, I just enjoy music. But I can understand that the reality of some people is different from mine as regards what “making music” is. My view or use of AI does not invalidate theirs.
Absolutely, and I hope you understand my point as well. Actually, I’ve never been able to learn an instrument, and I always wanted to make music. I’m all in on making music without having to learn anything complicated. Other people might not share my definition, or like what I could produce without their craft.
Exactly right. It's like arguing there is only one way to make food for enjoyment. It's pure snobbery to proclaim there's only one proper way to do X thing along these lines. Making art is just the same, there is no right way to do it.
The HN crowd wants everybody sitting at home on UBI, suffering, trying to be creative. It's like arguing for hand-washing clothes to get that full, proper, drawn-out, brain-smashing experience.
Now sit at home and be a good boy, take that UBI, create and be productive - but don't make it too easy, don't you dare use AI, bleed for that UBI.
>Now sit at home and be a good boy, take that UBI, create and be productive
Honestly, I prefer that to listening to marketing bros on LinkedIn posting about how AI means X is finished and everyone who learned X needs to pay for their webinar on writing prompts.
I did something similar with Claude Code: I did not write a single line of code, and it’s hosted on Cloudflare Workers. The free tier is enough for one person (and I feel safer owning and hosting my private data). Works beautifully.
Your website does not show how it works; there are no screenshots. It would be better with them.
Same. I vibe-coded a real-time notepad thing with optional E2E with CC over a weekend. Not going to plug it unless someone asks me to, just pointing out how easy this is nowadays.