AI Stole the Joy of Programming (paulefou.com)
57 points by svalee 48 days ago | 92 comments



I've had the absolute opposite experience: AI has brought back a lot of the joy of programming and building products for me.

I've been using Cursor extensively these past few months, for anything ranging from scaffolding to complex UIs. The trick, I've found, is to treat the AI like I would a junior engineer: giving it concrete, detailed tasks and breaking the problem down myself into manageable chunks. Here are two examples of little word games I've made; each took, all in all, a couple of days to ideate, design and build.

https://7x7.game You're given a grid and you need to make as many words as possible, using only the letters in the bottom row. There's complex state management, undo, persistent stats, light/dark modes, animations. About 80-90% of the code was generated and then manually tweaked/refactored.

https://vwls.game Given 4 consonants, you have to generate as many words as possible. This is heavily inspired by Spelling Bee, but with a slightly different game mechanic. One of the challenges was that not all "valid" words are fun; there are a lot of obscure/technical/obsolete words in the dictionary, so I used Claude's batch API to filter the dictionary down to words that are commonly known. I then used Cursor to generate the code for the UI, with some manual refactoring.
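A dictionary filter like that can be run offline: chunk the word list into batch prompts, then keep only words the model echoes back. This is just a sketch of the shape of that approach, not the commenter's actual code; the prompt wording and chunk size are made up, and the network call itself is left out:

```python
def build_filter_prompts(words, chunk_size=200):
    """Split the raw dictionary into one prompt per chunk; each prompt
    asks the model which words an average player would recognize."""
    prompts = []
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        prompts.append(
            "From this list, return only the commonly known words, "
            "comma-separated:\n" + ", ".join(chunk)
        )
    return prompts

def parse_keep_list(reply, original_chunk):
    """Intersect the model's comma-separated reply with the original
    chunk, so hallucinated words can never enter the dictionary."""
    kept = {w.strip().lower() for w in reply.split(",")}
    return [w for w in original_chunk if w.lower() in kept]
```

Each prompt would go out as one request in a batch job, and the replies would be fed back through `parse_keep_list` to produce the trimmed dictionary.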

In both cases, having the AI generate the code enabled me to focus on designing the games, both visually and from an interaction perspective. I also chose to manually code some parts myself, because these were fun.

At the end of the day, tools are tools, you can use them however you like, you just need to figure out how they fit in your workflow.


That's because you don't find joy in programming; you find joy in game design. Actually, game design can be automated too; the interfaces just aren't as advanced as Cursor's yet. We'll get there soon.


How does one learn game design? Any pointers for absolute noobs?


Exactly. AI lets me focus on the most interesting part of programming: coming up with how I want to solve the problem, not wasting time searching docs to find out whether a particular function will do what I want, and many other tasks I didn't even realize weren't my favourite before.


For me, a particular kind of certainty is a prerequisite for joyful programming. I too get the most out of the solutioning, not the annoying details that need to be clarified by documentation, but if I don't personally clarify those details, I no longer have a certain kind of baseline confidence in my program; a feeling of what parts are more likely to be the source of bugs and what parts aren't; a feeling of how much I grok each part of my own program. Without that, personally speaking, the joy is drained from the solutioning thereafter.


That’s a big part of programming though, IMO. Always has been.


At least a junior engineer can learn and grow from your feedback, becoming a more useful member of the team; the generative model, on the other hand, will not.


Eh, there's actually a solution for this.

Use Composer notebooks to keep a growing markdown document of context you want future versions to remember.

My Cursor today is much better than my Cursor on day 1.


I would imagine the context takes up valuable input tokens that you would otherwise need to use for your request. So you'll run out at some point and then you just have a simple model rather than a skilled engineer.


Can't you just cache the context?


I took too long to write my comment so I didn't see yours - you are exactly describing the feeling I also have when using AI to create things.


> In both cases, having the AI generate the code enabled me to focus on designing the games, both visually and from an interaction perspective. I also chose to manually code some parts myself, because these were fun.

…so not programming?


I don't agree. Copilot/etc is kind of worthless for me, it creates so many issues that I've never bothered to work with it.

AI is awesome for solving issues, asking it questions about code, asking for possible solutions. But maybe I'm just fast at writing code that actually solves the problem, so I don't need an AI to code for me.


Same, I spend way longer debugging and editing AI code than it would have taken me to just write it myself. I don't consider myself a fast typist; for me it's purely due to the inaccuracy of the results.


For me it's like: Oh, that solution is really smart. A few minutes later: Something doesn't work. 15 minutes of bug fixing only to find out the AI solution is nearly impossible to fix. Delete it and code it from scratch.


It's frustrating.

Cody's autocomplete used to work really well for me. Then they switched to DeepSeek. Now I regularly get suggestions that are irrelevant, incomplete, and contain syntax errors.

I'm not sure what it's like these days but I had a similar experience with Copilot a while back.

I wonder if good autocomplete is just too expensive.


Hi there, Cody contributor here - sorry to hear you had a bad experience! In our evals, our DeepSeek variant outperformed previous models and other alternatives. If it's working worse for you now, would you be open to sending us some examples/screenshots of poor completions? We'd like to incorporate these into our eval set so we can capture a more representative distribution of codebases and how Cody performs on them!


I can do that. What's the best way to get them to y'all?


Ping community@sourcegraph.com and I'll get a thread going. :)


I think it depends; it's quite good for prosaic code, and for cases where you have a lot to auto-complete that isn't quite regular enough for a macro.

Other than that, having chat with o1 and sonnet inside the editor is pretty good ngl


It sounds like your problem is that you ask AI to solve problems instead of having it actually code out extensive functionality.


What does that mean


If you're decent at coding, it's trivial to type out a few lines once you know what the problem is, so copilots barely help there. It only makes sense to have it generate entire files; that's where the huge productivity savings are.


Does it write entire files that work well? Maybe if it's very standard functionality you ask for?


It probably varies by environment, all I know is that it works 98% perfectly for me.


If I were a trained professional software engineer who found joy in writing tests and TDD, maybe I'd feel differently, but I write software to help with basic scientific analysis, and ChatGPT has been an absolute game changer for writing tests.

I personally find writing tests to be soul-crushing, boring work. I never really learned it properly, and when I have a well-documented function, CGPT typically does a decent job making a rough draft. I often have to work on the test function and fix some things, but the final product is way better than the PoS I would have put together: my guess is it has saved me hundreds of hours. I have developed a decent understanding of fixtures, mocking, sharing fixtures across modules, etc., all with the help of ChatGPT. It "understands" my project and how it is organized, and makes suggestions based on this understanding. Yes, it sometimes gets stuck in local minima and I have to kick it out, which can be frustrating. But even that is a learning process, as I often go to SO or other people's code bases to find good examples, and feed them to ChatGPT to get it unstuck.
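The fixture-and-mock workflow described above boils down to injecting a fake dependency so the test never touches slow or external resources. A minimal sketch with `unittest.mock` (the `fetch_mean` function and its loader are hypothetical, not from the commenter's project):

```python
from unittest import mock

def fetch_mean(load_samples) -> float:
    """Average whatever the injected loader returns."""
    data = load_samples()
    return sum(data) / len(data)

# In a pytest suite the stub would live in a fixture shared across
# modules via conftest.py; here it's built inline, so no file or
# network is ever touched.
fake_loader = mock.Mock(return_value=[1.0, 2.0, 3.0])
assert fetch_mean(fake_loader) == 2.0
fake_loader.assert_called_once()
```

The point of the pattern is that the test pins down `fetch_mean`'s logic in isolation, which is exactly the kind of scaffolding an LLM drafts well from a documented function signature.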

It's like the ultimate rubber duck paired programming partner. I tell it what I'm working on, and that's intrinsically helpful. But the rubber duck has really good feedback, because it has read the entire internet.

It's made writing tests for my code fun, for the first time ever.

The people I know personally who refuse to use CGPT are typically very good software developers, somewhat arrogant and have a chip on their shoulders, and honestly I think in 20 years we'll look back at them like people who thought the internet was a passing phase in the mid 1990s. I also think many of them don't understand how LLMs work, and how powerful they can be when prompted correctly


Tests are usually soul crushing and boring for the same reason PHP was in 1999 - a complete lack of structure, tooling and separation of concerns made it tedious and difficult.

I find it interesting that when people describe to me how they use LLMs to write code it's either short throwaway scripts or to write the kind of code that would make me retch (e.g. tests stuffed full of horrible mocks, spaghetti boilerplate).


Bear with me here: what if that retch-inducing code works fine? It's generated by an AI, can be understood by an AI (presumably), does what it should, so why should it be palatable for a human? You're not in the loop anymore...


If it remains perfectly bug free until the end of time then yes it's fine /s


And when a bug shows up (as they always do), some AI will fix it.


The difference with the internet is that it was not the people on the inside (the engineers) calling it useless; it was the business people and others who had no understanding of the technology and its possibilities.

In this case it is the opposite, the best ML/software engineers today think this is a passing phase. It's the general population and business people who are claiming it to be revolutionary.

Only time will tell though


There is reasonable debate about their scope and limits, but it's hard to find anyone who understands how LLMs work who thinks they are a passing phase.

The pushback I see is from people who were raised to write everything from scratch, who don't trust the output of LLMs because of "hallucinations" or other crappy outputs. The problem is, the people making these claims are really out of touch with prompt engineering, and with how students are currently learning to code with AI in the loop (and for basic coding and testing with common libraries, LLMs are really, really good at explaining things and writing entry-level code and tests - this is not arguable: the people fighting this are graybeards who haven't coded at a basic-to-intermediate level in a long time).

A good software developer, with a nose for code smells, will not just accept any old code an LLM produces; you have to use it intelligently and push back on bizarre constructions. Hence, for me, who hates writing tests, it is an amazing tool. If I had an intern or an undergrad who loved grinding out tests I'd use them, but that's basically my LLM at this point (and for the "but ackshually" guys: yes, obviously you can't use them mindlessly; we are writing code, not drawing doodles).


Excellent strawman, good job!


For me, it’s quite the opposite—it brought back the joy of programming.

There are thousands of weather apps in the App Store, but none display rain data exactly the way I’d like to see it. That’s why I’ve long considered writing my own home screen widget to show it exactly as I want.

I hadn’t developed iPhone apps in a few years, so I had no experience with SwiftUI, the Swift Graph framework, or creating widgets. Just two years ago, building an app with a widget from scratch would have taken me a week — to read tutorials, navigate the necessary documentation, get started and solve my beginner bugs. Because of that time investment, I always hesitated to even begin.

Now, I’ve created exactly what I wanted in a single afternoon after work, with the help of AI. To be honest, GitHub Copilot isn’t very helpful for this, though it does speed up repetitive typing. However, using ChatGPT to scaffold the graph code—with me tweaking the parameters—made the process much faster. Since they added search functionality, there’s minimal "hallucination" of APIs, allowing for quick iterations and bringing back that “joy of programming” feeling.


'AI Stole the Joy of Programming' is not quite the title that blog post has.


I assumed from the actual title that there wouldn't be any content worth reading and stopped there. It's funny to call the false HN title "clickbait" but I did click that one and wouldn't have clicked the other.


I mean I clicked on this one, but I would've double clicked the original title. I like a little spice in my life.


We need an AI tool to digest the articles for us to return a signal for click bait.


Probably not even AI if there's a community helping to curate headlines. Sort of what the DeArrow browser extension does with that cesspool of sensational titles, amazed facial expressions and general bullshit that has become YouTube lately. https://dearrow.ajay.app/


When you program for a living, you want the fastest path to creating the best code conforming to your metric of "best." Copilot may or may not be able to get you there, YMMV as they say.

When you program for a hobby, you oftentimes seek to enjoy the route as much or more than reaching the destination. Copilot would be a distraction and an annoyance in this case - unless you're genuinely stuck and then you can use Copilot as a mentor.

It all depends on your context and what you're trying to do.


I've taken the simple solution: if I want to enjoy programming for programming's sake, I turn copilot off. If I want to be careful and understand the problem and its solution in detail, I turn copilot off. If I simply want to get a toy project done and don't care at all about the implementation process, I might leave it on.

I've had an absolutely magical experience with copilot though. I honestly find it a bit strange when others say it has just been bad for them


Copilot does very well when I'm solving the same problem I've solved in the past with different parameters. Parse a CAN network packet (8 bytes, so they do weird things like 6-bit counters with 2 bits in byte 3 and the rest in byte 4) - copilot can write that and the tests quickly; we have hundreds of different CAN packets we parse, so there is a lot of example code to look at. Everything is just different enough to look like boilerplate while not actually being boilerplate. However, when I'm trying to write code that isn't a variation of something I've done many times before, copilot is not helpful. It can't complete as much, and what it does complete is often wrong for style reasons (it would be nice if the function it wants to call existed, but it doesn't, or it takes some other parameter that it doesn't know about).
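That kind of packing (a small counter straddling the byte 3/byte 4 boundary) is mechanical enough that one generic extractor covers every variant. This is a sketch, not the commenter's actual parser, and it assumes the signal is described with little-endian bit numbering over the whole payload:

```python
def extract_field(payload: bytes, start_bit: int, width: int) -> int:
    """Pull an unsigned bit field out of an 8-byte CAN payload,
    treating the payload as one little-endian 64-bit integer."""
    if len(payload) != 8:
        raise ValueError("expected a full 8-byte CAN payload")
    value = int.from_bytes(payload, "little")
    return (value >> start_bit) & ((1 << width) - 1)

# A 6-bit counter with 2 bits in byte 3 and 4 bits in byte 4
# (bits 30..35 of the 64-bit view):
payload = (42 << 30).to_bytes(8, "little")
assert extract_field(payload, start_bit=30, width=6) == 42
```

Each of the hundreds of packet definitions then reduces to a table of `(start_bit, width)` pairs, which is also why the real code looks like boilerplate without quite being boilerplate.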


It's just the start, it's about to steal the joy of everything in life pretty soon.


This is my big fear about pervasive use of AI. I'm afraid that companies, policy makers, regulators, etc. all start letting AI make important decisions without any human understanding of the reasoning behind the decision and the human puppets hiding behind it with no accountability.

I imagine scenarios where AI could be given complete authority to decide who is hired/fired, who gets medical care, who gets food, who gets utilities (water/electricity/natural gas) to their homes, who gets disaster relief, etc. Quite frightening when you think about it. If AI decided to cancel you (and it had this level of authority) your very existence would be in danger.


I think the scale at which AI is applied to generate content will cripple search engines even before your scenarios come true.

I have a hobby of nature photography, and in the last 2 years I have stopped spending time browsing pages, as most of them are garbage generated by AI.

One argument people make when hyping generative AI is that humans make mistakes the same way AI does, so at worst you get something similar to human error. Yes, but the volume generated by AI is magnitudes bigger, and only a limited number of humans can validate and filter it. If nothing changes, this volume of garbage will surely overflow everything, and there will be no way to differentiate bad from good, fake from authentic content.


Haven’t books been written about this exact scenario?


I have no idea, but it wouldn't surprise me if there were.


Online comments are about to die. First, we won't be able to tell who is real anymore. People will profit by running huge botnets for advertising and political manipulation. (Inb4 someone says this already happens: AI will keep lowering the bar to enter and evade detection). But a knock on effect is there will be a market for farming fake accounts for later sale to said manipulators.


> But a knock on effect is there will be a market for farming fake accounts for later sale to said manipulators.

We'll lift millions in the "global south" out of poverty by providing the tools to criminals and foreign adversaries that drive demand for cheaply-staffed high-rep social media account farms.

What a time to be alive. What frontiers we are exploring.


It'll be sad but the brain rot from pervasive social media is getting severe, especially in older generations. If this pushes more people to touch grass, it's a net positive.


I totally agree that many online comments are toxic and many people should go touch grass. However I also see this as the next step in a series of humans getting their communications cut off by computers.

Talking to people online is in fact a way that some people get socialization and develop writing skills. If my prognostication is correct and people give up even trying to interact on most websites because "it's not worth starting a conversation only to find out 10 minutes later that they're a bot" or "no one listens to my opinion because that's what all the bots say so they don't think I'm real", the effect is to make the web even more "read-only" where people are discouraged from sharing themselves.


Whether the "sharing themselves" age was a peak or a low is still up for debate.


That’s just because people have been so indoctrinated by capitalism that they only find joy if they’re being worked to the bone. They have forgotten the joy of not doing anything productive.


Strongly disagree. There is an intrinsic pleasure for many people in feeling that they can make a contribution - that their skills, knowledge and ability is valuable and valued


But their contributions are not valued. You can spend a year working on something cool and when you finally show it to the world the most you get is a couple upvotes on Hackernews and people using it for maybe 10-15 minutes and then never again. That’s the reality for many people.


Exactly, that exact drive to feel helpful and needed is why dogs are trainable and men are depressed. It's why the old wolf who can't catch its own food anymore leaves the pack to die alone, cold, and hungry.

That craving is a deeper, more primal emotion than hunger or thirst for many people (and animals) - boiling it down to "muh capitalism" is just disingenuous.


There are many people who can afford to do nothing. They rarely do, and those who do, are not necessarily happy.


Those who do nothing tend to die early/young. Those who get a hobby of some sort tend to live a normal lifespan.


Why learn to paint or draw if generative ML can create a picture for you?

Why practice writing if generative ML can create a poem or short story for you?

In that regard you're best off just sitting down in front of a screen and consuming content generated by ML.

And from the example in the original blog post the author had the generative ML do the fun stuff in solving the problem and all they did was drudge work cleaning it up and submitting it. Very productive from the company perspective but reminds me a lot of low thought factory processes.


Many many people still play violin, piano, trumpet, guitar even though recordings (in various forms) have been around for more than 100 years.


You didn’t answer your questions.


To me the joy of not doing anything productive means just sitting/laying on the couch consuming content in one form or another. I'm sure it can also mean just sitting and enjoying a coffee, which is fine but I don't find that necessarily enjoyable by itself.

Making art, music, and painting are all creative and productive endeavours, so when you say it's okay not to be productive, to me that means it's okay to be a couch vegetable. We need the occasional rest, but I don't want generative ML to do all the creative and rewarding things.


Consuming content is really just another capitalist activity, you’re producing views, impressions, you’re trading away attention in exchange for some entertainment.

Instead, go outside and consume nature. Consume a sunset, not a TV show.


I know full well the joy of being unproductive because capitalism affords me the ability to do it. It's expensive and requires an amount of agency only afforded by a high income or outside wealth.

Do you really think the authoritarian elites will let the unwashed masses with no income do whatever the hell they want? Can you really say that after COVID lockdowns demonstrated their true colors?

None of the other capitalism-alternatives have historically afforded the kind of luxury you're suggesting either. Quite the opposite, in fact.


Sounds about right. Let's throw more money, power and natural resources at it and see if the scale tips the other way! If not well nothing of value will have been lost right? What's shortening humanity's lifetime in comparison to the potential productivity gains!


Aren't you at least a little bit excited for the potential benefits?


My spouse and I both work on different ends of AI work (spouse, content production, editing, and prompt-library building; me, feeding the things with data).

No, not really. They're useful but not revolutionary for actual productive work, and I'm being generous in dubbing some AI-based products themselves "productive" (I eagerly await the studies that I'm sure the companies building these things will not bother to do, proving that these are cost/benefit better than other approaches they're replacing—I definitely don't consider it certain that they are).

They shine when the fewest shits are given, which is mostly work that didn't need to be done in the first place, and... mass scams/astroturfing/spam. Hooray.

I don't really see this balance tipping much with the general approach the field's pursuing now.


I’m excited about AI technologies that supplement human capability, like driving autonomy and document search.

I’m not excited about AI technologies that replace human capabilities, like pretty much everything that seems to be getting investment dollars these days.

AI-generated art aside (which is mostly terrible), I’m already having a tough time telling what’s real and what’s not online. The thought of this bleeding into daily life in big ways is depressing.

I’m also not thrilled about a generation of people that struggle to write a paragraph of critical or original thought because of AI dependency.

The last act of Up was supposed to be a warning, not an end-goal.


I am not. If AI delivers on its promises, an unprecedented dumbing down of humanity is to follow. If it doesn't, it was all a waste. Even if we win, we lose.


Is there anything AI (or more like AGI) could do for you that would change your opinion? Eternal life, interstellar travel, curing diseases, educating people with superhuman patience and competence would not be worth it?


There's something very "late stage capitalism" about pouring torrents of capital into tools that can replace human ingenuity, artistry and creativity while starving stuff like infrastructure, space travel, etc.


The only times I've used copilot are when I want to execute my creative goals more easily rather than waste mental energy on the boring parts. It's weird and sad that you see it the other way around.


Capitalism is about profit. Infrastructure is financed when it brings profit, and ignored when it stops doing that. E.g. railroad boom and the current state of railroads.

When someone figured that space can bring profit too, we got some developments in space travel as well.


[flagged]


A lot of people don't/didn't set out in life to be a hyper-adaptive worker who puts 110% into the latest thing they can be the best at all the time. They set out to be an average worker who learns a skill and lives a life focused on things outside of being the maximal worker. For these people it's even less about whether an LLM can beat them (which, to be honest, for most all of us is or will be true at least for certain tasks) and more about how much an LLM may upset the life path they were on.


> A lot of people don't/didn't set out in life to be a hyper-adaptive worker who puts 110% into the latest thing they can be the best at all the time

This used to be life for 99% of human history. Don't take the past 30 years' lifestyle for granted just because we created government, free markets or whatever. Human nature is to be a hyper-adaptive worker who puts 110% into the latest thing they can be the best at all the time - otherwise they die.


Humans never needed to be hyper-adaptive. 80+% of people in the far past were farmers their whole life, just like their parents, and their children, and everyone they associated with. Prior to farming, dumber means of food gathering. You worked your ass off or died but it was never about the latest thing, that's a purely modern invention. Hardly anybody went to "school" or learned a trade, nobody expected job changes every 5 years. You were a dirt poor, uneducated, worn down farmer. The modern invention was the idea we'd have time to be something else (or be a significantly more educated farmer not so concerned with imminent death) and the very recent invention is that you'd need to be multiple something elses over time.


Any worn-down farmer you describe as having it easier would trade lives with a modern human. They would not complain as much, that's for sure. At least not until they got used to it.


This is absolutely not true. Human history is not the story of hyper-adaptive Silicon Valley disruptors; in fact you have it completely backwards. For most of human history, growth was enormously slow, and only in the past few decades has it really skyrocketed to the point where people become obsolete this quickly. If you were born in 1008, you could become a blacksmith and the trade would remain viable for the next 600 years at least, without too much issue.


It's not about whether an LLM can do better work than you; it's about whether it can do cheaper work than you, and it can.


It’ll do both soon.


Your last sentence crosses the line into personal attack.


When we're building something, we don't have all the specs upfront (unless it's simple). I'm learning and adapting as I write more of the project, and at some points I may backtrack or start from scratch. For projects where you have the whole code upfront, I guess you could pass that to an LLM (maybe).

The way I found most success using LLMs is as a partner to ping-pong ideas, to come up with code design, algorithms, and data structures that would fit a particular scenario. Then I'm ignoring its code and writing it to fit the project. The trick is to use the randomness combined with the vast array of information it holds to your advantage - like a supercharged Google.

Regarding my joy of programming, for me it's not even close. I get my joy from the project as a whole, not from snippets of code sprinkled around (sometimes I wish it could - I have hundreds of projects I would like to tackle but they're not worth my time). The only thing I worry about is that the next version would not be accessible to the public or they would cost exorbitant amounts.

edit: for the way I'm using LLMs, I found the approach taken by the Zed editor to be the best; I really recommend its buffer - easy to copy-paste, modify and search (it would be nice to also have divergence from a chat, hopefully in the future)


This is one of the reasons why I don't use genAI for programming purposes. It increases the need to review and correct code I didn't write, which increases the amount of work that I don't enjoy doing.


the start:

ai is a junior engineer you as a senior engineer can coach

the end:

the ai is a senior engineer with a half finished problem you can polish as a junior engineer


Programming stole the joy of programming.

My experience is that like so much else there's an expiry date on the joyful coding.

I gave it another chance with AI, but AI is too incompetent; it's more of a creative intern that rushes out bad reports than a competent replacement for painstakingly reading documentation and googling.


Interesting example. As programming languages and tooling such as static analysis become more and more advanced I would think memory leaks or mismanagement of memory is going to become a thing of the past. So I would argue that one way or another this was bound to happen.


Damn... I wish I could say this wasn't true. I've been trying to lie and say it hasn't, but it absolutely has made programming less enjoyable, by far... I've been trying to convince myself otherwise, but I am just lying to myself.


Is this tongue-in-cheek? It seems like it is, but I can't tell for sure. Disliking LLMs for coding because they're too helpful is an amusing concept either way.



Enter artisanal programming


Web dev used to be fun and FREE. Now almost everything costs something. The stupid AI bubble will pass. We need real content, NOT AI BS!!!


I liked the original title more, a bit edgy...



