You'd need massive networking improvements too. Telling someone "try next to the stairs, the cellular signal's better there" is an example I saw yesterday (it was a basement level), and that's not uncommon in my experience. You have both obstacles (underground levels, tunnels, urban canyons, extra-thick walls, underwater) and distance (large expanses with no signal in the middle of nowhere); satellites help with the latter but not with the former. Local computing with no network dependencies works everywhere, as long as you have power.
Is it actually the case that local computation on mobile devices is much more expensive than running the radios? I was under the impression that peripherals like the speakers, radios, and display often burn much more power than local manipulation of bits.
You are definitely correct in that the screen takes a big chunk of the power, but it is my understanding that the CPU takes the most. This is why you cannot run x86 systems on battery power very efficiently.
Look at it this way: older x86 laptops have the same screens as the newer ARM-based laptops, but the ARM laptops have significantly more battery life using the same battery tech. This is definitely a sign that the processor is the biggest user of power in the system.
A good sized readable font is a bit funny though. That line of text is very small on my phone, and I presume most people today browse on phones. I can also imagine some phones with a higher resolution showing this text even smaller, making it completely illegible.
I'm all about minimalism so long as it doesn't hurt UX but these examples of minimalism always end up going too far. It's like it becomes a competition for minimalists - "look how much more minimal I am than you! Therefore I'm a better minimalist."
There are other styling bugs: on an iPhone, there’s no vertical space between a subhead and the content below it, and there’s too much space between a para and a bulleted list.
Why not be minimal in implementation, too, and just use default styling? Don’t fear Times New Roman…
Interesting considering the Pixel 6 has 403 PPI and my phone Nothing Phone (1) has 402 PPI. So almost identical in that regard. Maybe it's a browser difference (Firefox here).
This page was created almost twelve years ago. Mobile phones were not the main way to consume pages at that time for most of the world. I think even meta viewport did not yet exist back then.
Apparently the iPhone 11 is 324 PPI whilst my Nothing Phone (1) is 402 PPI. Probably why it looks bigger to you, but I still wonder whether it would look good on your phone to a person with less than good eyesight. In any case, readability shouldn't depend on the user's device. There are plenty of ways to get responsive font sizes in CSS (viewport units, clamp(), media queries).
I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts. Because my personal experience has involved a lot of that in one of the companies listed in the study.
Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
I am curious about this also. I have now seen multiple PRs I had to review where a method was clearly completely modified by AI for no good reason, and when asked why something was changed we just got silence. Not exaggerating, literal silence, and then an attempt to ignore the question and explain the thing we were first asking them to do. Clearly they had no idea what was actually in the PR.
This was done because we asked for a minor change (talking maybe 5 lines of code) to be made and tested. So now not only are we dealing with new debt, we are dealing with code that no one can explain why it was completely changed (and some of the changes were change for the sake of change), and those of us who manage this code are now looking at completely foreign code.
I keep seeing this with people using these tools who are not higher-level engineers. We finally got to the point of rejecting these PRs and saying to go back and do it again, losing any of the time that was theoretically gained from doing it in the first place.
Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.
> Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.
It is worse than that. We're all maintaining in our heads the mental sand castle that is the system the code base represents. The abuse of the autocoder erodes that sand castle because the intentions of the changes, which are crucial for mentally updating the sand castle, are not communicated (because they are unknowable). It is the same thing with poor commit messages, or poor documentation around requirements/business processes. With enough erosion, plus expected turnover in staff, the sand castle is actually gone.
Easy: exclude developers who try it. Learning--be it a new codebase, a new programming language, a new database--takes time. You're not going to be as productive until you learn how. That's fine! Cheating on your homework with an LLM should not be something we celebrate, though, because the learner will never become productive that way, and they won't understand the code they're submitting for review.
The truth is that the tools are actually quite good already. If you know what you are doing they will 10-20x your productivity.
Ultimately, not adopting them will relegate you to the same fate as assembly programmers. Sure, there are places for it, but you won't be able to get nearly as much functionality done in the same amount of time, and there won't be as much demand for it.
Do you agree that the brain-memory activity of writing code and reading someone else’s code is totally different?
The sand castle analogy is still valid here because once you have 10x productivity, or worse 20x, there is no way you can understand things as deeply as if you wrote them from scratch. Without spending a considerable amount of time and bringing productivity back down, the understanding is not the same.
If no one is responsible, because it’s crap software and you won’t be around long enough to bear responsibility… it’s ok, I guess?
if you are seeing 900% productivity gains, why did these controlled experiments only find 28%, mostly among programmers who don't know what they're doing? and only 8–10% among programmers who did? do you have any hypotheses?
i suspect you are seeing 900% productivity gains on certain narrow tasks (like greenfield prototype-quality code using apis you aren't familiar with) and incorrectly extrapolating to programming as a whole
I think you’re probably right, though I fear what it will mean for software quality. The transition from assembly to high level languages was about making it easier to both write and to understand code. AI really just accelerates writing with no advancement in legibility.
When using code assist, I've occasionally found some perplexing changes to my code I didn't remember making (and wouldn't have made). Can be pretty frustrating.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
Thank you, this says what I have been struggling to describe.
The day I lost part of my soul was when I asked a dev if I could give them feedback on a DB schema, they said yes, and then cut me off a few minutes in with, “yeah, I don’t really care [about X].” You don’t care? I’m telling you as the SME for this exactly what can be improved, how to do it, and why you should do so, but you don’t care. Cool.
Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out. I’m not even talking about doing micro-benchmarks (though you should…), I’m talking about dead-simple stuff like “maybe use this data structure instead of that one.”
In a similar vein, some days I feel like a human link generator into e.g. Postgres or Kafka documentation. When docs are that clear, refined, and just damn good, but it seems like nobody is willing to actually read them closely enough to "get it", it's a very depressing and demotivating experience. If I could never again have to explain what a transaction isolation level is, or why calling Kafka a "queue" makes no sense at all, I'd probably live an extra decade.
At the root of it, there's a profound arrogance in putting someone else in a position where they are compelled to tell you you're wrong[1]. Curious, careful people don't do this very often because they are aware of the limits of their knowledge and when they don't know something they go find it out. Unfortunately this is surprisingly rare.
[1] to be clear, I'm speaking here as someone who has been guilty of this before, now regrets it, and hopes to never do it again.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
They are/will be the management's darling because they too are all about delivering without any interest in technology either.
Well-designed technology isn't seen as a foundation anymore; it is merely a tool to keep the machine running. If parts of the machine are being damaged by the lack of judgement in the process, that shouldn't come in the way of this year's bonus; it'll be something to worry about in the next financial year. Nobody knows what's going to happen in the long term anyway, so make hay while the sun shines.
It's been a while since my undergrad (>10 yrs), but many of my peers were majoring in CS or EE/CE because of the money; at the time I thought that was a bit depressing as well.
With a few more years under my belt I realized there's nothing wrong with doing good work and providing yourself/your family a decent living. Not everyone needs to be passionate enough about their field to become among the best, or a "10x"er, in order to contribute. We all have different passions, but we all need to pay the bills.
Yeah, I think that the quality of work (skill + conscientiousness), and the motivation for doing the work, are two separate things.
Off-the-cuff, three groups:
1. There are people who are motivated by having a solid income, yet they take the professionalism seriously, and do skilled rock-solid work, 9-5. I'd be happy to work with these people.
2. There are other people who are motivated by having a solid or more-than-solid income, and (regardless of skill level) it's non-stop sprint performance art, gaming promotion metrics, resume-driven development, practicing Leetcode, and hopping to the next opportunity regardless of where that leaves the project and team.
3. Then there's those weirdos who are motivated by something about the work itself, and would be doing it even if it didn't pay well. Over the years, these people spend so much time and energy on the something, that they tend to develop more and stronger skills than the others. I'd be happy to work with these people, so long as they can also be professional (including rolling up sleeves for the non-fun parts), or amenable to learning to be professional.
Half-joke: The potential of group #3 is threatening to sharp-elbowed group #2, so group #2 neutralizes them via frat gatekeeping tactics (yeah-but-what-school-did-you-go-to snobbery, Leetcode shibboleth for nothing but whether you rehearsed Leetcode rituals, cliques, culture fit, etc.).
Startups might do well to have a mix of #3 and #1, and to stay far away from #2. But startups -- especially the last decade-plus of too many growth investment scams -- are often run by affluent people who grew up being taught #2 skills (how to game your way into a prestigious school, aggressively self-interested networking, promoting yourself, etc.).
#3 is the old school hacker, "a high-powered mutant of some kind never even considered for mass production. Too weird to live, and too rare to die" :-)
Blend of #1 & #3 here. Without the pay, I don't think I'd muster the patience for dealing with all the non-programming BS that comes with the job. So I'd have found something else with decent pay, and preferably not an office job. Sometimes I wish I'd taken that other path. I have responsibilities outside work, so I'm rarely putting in more than 40 hours. Prior to family commitments, I had side projects and did enough coding outside work. I still miss working on hobby video game projects. But I do take the professionalism seriously and will do the dirty work that has to be done, even if it means cozying up to group #2 to make things happen for the sake of the project.
The hardest part of any job I’ve had is doing the not so fun parts
(meetings, keeping up with emails, solidly finishing work before moving on to something new)
As I progressed, I've learned I am more valuable for the company working on things I am interested in. Delegate the boring stuff to people that don't care. If it is critical to get it done right, do it yourself.
There’s certainly nothing wrong with enjoying the high pay, no – I definitely do. But yeah, it’s upsetting to find out how few people care. Even moreso when they double down and say that you shouldn’t care either, because it’s been abstracted away, blah blah blah. Who do you think is going to continue creating these abstractions for you?
I get the pragmatism argument, but I would like to think certain professions should hold themselves to a higher standard. Doctors, lawyers, and engineers have a duty to society IMO that runs counter to a “just mail it in to cash a paycheck” mentality. I guess it comes down to whether you consider software developers to be that same kind of engineer. Certainly I don’t want safety critical software engineers to have that cavalier attitude (although I’ve seen it).
...Someone else who thinks of actual web applications as an abstraction.
In the olden days, we used to throw it over to "Ops" and say, "your problem now."
And Junior developers have always been overwhelmed with the details and under pressure to deliver enough to keep their job. None of this is new! I'm a graybeard now, but I remember seniors having the same complaints back then. "Kids these days" never gets old.
I get what you’re saying, but at the same time I feel like I encounter the real world results of this erosion constantly. While we have more software than ever, it all just kind of feels janky these days. I encounter errors in places I never had before, doing simple things. The other day I was doing a simple copy and paste operation (it was some basic csv formatted text from vs code to excel iirc) and I encountered a Windows (not excel or vs code) error prompt that my clipboard data had been lost in the time it took me to Alt+Tab and Ctrl+v, something I’ve been doing ~daily for 3 decades without any issues.
I’m more of a solo full-stack dev and don’t really have first-hand experience building software at scale or with the process it takes to manage a codebase the size of the Windows OS, but these are the kinds of issues I see regularly these days that I wouldn’t have in the past. I’ve also used macOS daily for almost as long, and Apple’s software has really tanked in terms of quality; I hit bugs and unexpected errors regularly. I generally don’t use their software (Safari, Mail, etc.) when I can avoid it. Also have to admit lack of features is a big issue for me in their software.
>Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out.
Similarly, Docker is an amazing technology, yet it enabled the dependency towers of Babel we have today. It enabled developers who don't care about cleaning up their dependencies.
Kubernetes is amazing technology, yet it enabled developers who don't care to ship applications that constantly crash; but who cares, Kubernetes will automatically restart everything.
Cloud and now AI are similar enabler technologies. They could be used for good, but there are too many people that just don't care.
The fine art of industry is building ever more elaborate, complicated systems atop things someone deeply cares about, to be used by those who don't have to care deeply.
How many developers do we imagine even know the difference between SIMD and SISD operations, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?
We're just watching the bar of "Don't need to care because a reliable system exists" move through something we know and care about in our lifetimes. Progress is great to watch in action.
> How many developers do we imagine even know the difference between SIMD and SISD operations, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?
hopefully some of those that did a computer science degree?
Disclaimer: I work at a company that sells coding AI (among many other things).
We use it internally and the technical debt is an enormous threat that IMO hasn't been properly gauged.
It's very very useful to carpet bomb code with APIs and patterns you're not familiar with, but it also leads to insane amounts of code duplication and unwieldy boilerplate if you're not careful, because:
1. One of the two big biases of the models is the training data, which is StackOverflow-type material: isolated examples that don't take context and constraints into account.
2. The other is the existing codebase, and the model tends to copy/repeat things instead of suggesting that you refactor.
The first is mitigated by, well, doing your job and reviewing/editing what the LLM spat out.
The second can only be mitigated once diffs/commit history become part of the training data, and that's a much harder dataset to handle and tag: some changes are good (refactorings) but others might not be (bugs that get corrected in subsequent commits), and there is no clear distinction, as commit messages are effectively lies (nobody ever writes: "bug introduced").
Not only that, merges/rebases/squashes alter/remove/add spurious meanings to the history, making everything blurrier.
Consider myself very fortunate to have lived long enough that I'm reading a thread where the subject is the quality of the code generated by software. Decades of keeping that lollypop ready to be given, and now look where we are!
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
Bingo, this, so much this. Every dev I know who loves AI stuff was a dev I had very little technical respect for pre-AI. They got some stuff done, but there was no craft or quality to it.
For what it's worth, a (former) team mate who was one of the more enthusiastic adopters of gen AI at the time was in fact a pretty good developer who knew his stuff and wrote good code. He was also big on delivering and productivity.
In terms of directly generating technical content, I think he mostly used gen AI for more mechanical stuff such as drafting data schemas or class structures, or for converting this or that to JSON, and perhaps not so much for generating actual program code. Maybe there's a difference to someone who likes to have lots of program logic generated.
I have certainly used it for different mechanical things. I have copilot, pay for gpt4o etc.
I do think there is a difference between a skilled engineer using it for the mechanical things, and an engineer that OFFLOADS thinking/designing to it.
There's nuance everywhere, but my original comment was definitely about the people who attempt to lean on it very hard for their core work.
If it were not for all the things in your profile, I would aver that the only devs I know that think otherwise of their coding abilities were students at the time.
Hm… I think it’s fair to say that as a learning tool, when you are not familiar with the domain, coding assistants are extremely valuable for everyone.
I wrote a Kotlin IntelliJ IDEA plugin in a day; I’d never used Kotlin before, and the JetBrains UI framework is a dog’s breakfast of obscure edge cases.
I had no skills in this area, so I could happily lean into the assistance provided to get the job done. And it got done.
…but, I don’t use coding assistants day to day in languages I’m very familiar with: because they’re flat out bad compared to what I can do by hand, myself.
Even using Python, generated code is often subtly wrong and it takes more time to make sure it is correct than to do it by hand.
…now, I would assume that a professional kotlin developer would look at my plugin and go: that’s a heap of garbage, you won’t be able to upgrade that when a new version comes out (turns out, they’re right).
So, despite being a (I hope) competent programmer I have three observations:
1) the code I built worked, but was an unmaintainable mess.
2) it only took a day, so it doesn’t matter if I throw it away and build the next one from scratch.
3) There are extremely limited domains where that’s true, and I personally find myself leaning away from LLM anything where maintenance is a long term goal.
So, the point here is not that developers are good/bad:
It’s that the LLM-generated code is bad.
It is bad.
It is the sort of quick, rubbish prototyping code that often ends up in production…
…and then gets an expensive rewrite later, if it does the job.
The point is that if you’re in the latter phase of working on a project that is not throw away…
You know the saying.
Betty had a bit of bitter butter, so she mixed the bitter butter with the better butter.
.. the exact same content for a screensaver with Todd Rundgren in 1987 on the then-new color Apple Macintosh II in Sausalito, California. A different screensaver called "Flow Fazer" was more popular and sold many copies. The rival at the time was "After Dark" .. whose founder had a PhD in physics from UC Berkeley but also turned out to be independently wealthy, and then one of the wealthiest men in the Bay Area after the dot-com boom.
I am not necessarily arguing against GenAI. I am sure it will have effects somewhat similar to the ones the explosion in popularity of garbage-collected languages et al. had on software back in the 90s.
More stuff will get done, the barrier of entry will be lower etc.
The craft of programming took a significant quality/care hit when it transitioned from "only people who care enough to learn the ins and outs of memory management can feasibly do this" to "now anyone with a technical brain and a business use case can do it". Which makes sense, the code was no longer the point.
The C++ devs rightly felt superior to the new java devs in the narrow niche of "ability to craft code." But that feeling doesn't move the needle business wise in the vast majority of circumstances. Which is always the schism between large technology leaps.
Basically, the argument that "it's worse" is not WRONG. It just doesn't matter as much now, the same as it did not really matter in the mid-90s, compared to the ability to "just get something that kinda works."
I mean... in a way yes. The status of being someone who cares about the craft of programming.
In the scheme of things however, that status hardly matters compared to the "ability to get something shipped quickly" which is what the vast majority of people are paid to do.
So while I might judge those people for not meeting my personal standards or bar, in many cases that does not actually matter. They got something out there; that's all that matters.
SWE has a huge draw because frankly it's not that hard to learn programming, and the bar to clear in order to land a $100-120k work-from-home salary is pretty low. I know more than a few people who career hopped into software engineering after a lackluster non-tech career (that they paid through the nose to get a degree in, but were still making $70k after 5 years). By and large these people seem to just not be "into it", and like you said are more about delivering than actually making good products/services.
However, it does look like LLM's are racing to make these junior devs unnecessary.
> However, it does look like LLM's are racing to make these junior devs unnecessary.
The main utility of "junior devs" (regardless of age) is that they can serve as an interface to non-technical business "users". Give them the right tools, and their value will be similar to good business controllers or similar in the org.
A salary of $100-$150k is really low for someone who is really a competent developer. It's kept down by those "junior devs" (of all ages) that apply for the same jobs.
Both kinds of developers will be required until companies use AI in most of those roles, including the controllers, the developers and the business side.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
I found this too. But I also found the opposite, including here on HN: people who are interested in technology have almost an aversion to using AI. I personally love tech and I would and do write software for fun, but even that is objectively more fun for me with AI. It makes me far more productive (very much more than what the article states) and, more importantly, it removes the procrastination; whenever I am stuck or procrastinating about getting started, I start talking with Aider, and before I know it another task is done that I probably wouldn't have done that day otherwise.
That way I now launch open and closed source projects every two weeks, while before that would take months to years. And the cost of having this team of fast, experienced devs sitting with me is at most a few $ per day.
> people who are interested in technology have almost an aversion against using AI
Personally, I don't use LLMs. But I don't mind people using them as interactive search engines or for code/text manipulation, as long as they're aware of the hallucination risks and take care with what they're copying into the project. My reason is mostly that I'm a journey guy, not a destination guy. And I love reading books and manuals, as they give me an extensive knowledge map. Using LLMs feels like taking guidance from someone who has never ventured 1km outside their village, but has heard descriptions from passersby. Too much vigilance required for the occasional good stuff.
And the truth is, there are a lot of great books and manuals out there. And while they teach you how to do stuff, they often also teach you why you should not do it. I strongly doubt Copilot imparts architectural and technical reminders alongside the code.
For my never-finishing side projects I am too; I enjoy my weekends tinkering on the 'final database' system I have been building in CL for over a decade and will probably never really 'finish'. But to make money, I launch things fast and promote them; AI makes that far easier.
Especially for parts like frontend that I despise; I find zero pleasure in working with CSS magic that even seasoned frontenders have to try/fail in a loop to create. I let Sonnet just struggle until it's good enough instead of me having to do that annoying chore; then I ask Aider to attach it to the backend, and done.
Yeah. There is a psychological benefit to using AI that I find very beneficial. A lot of tasks that I would have avoided or wasted time doing task avoidance on suddenly become tractable. I think Simon Willison said something similar.
Are you just working on personal projects or in a shared codebase with tens or hundreds of other devs? If the latter, how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt? I've gotten a lot of mileage in my career so far by paying attention when something felt tedious or mundane, because that's a signal you need some combination of refactoring, tooling, or automation. If instead you just lean on an LLM to brute-force your way through, sure, that accomplishes the short term goal of shipping your agile sprint deliverable or whatever, but what of the long term cost?
> Are you just working on personal projects or in a shared codebase with tens or hundreds of other devs?
Like - I presume almost everyone - somewhere in the middle?
That was a helluva dichotomy to offer me...
> how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt?
By reading it, thinking about it and testing it?
Did I somehow give the impression I'm cutting and pasting huge globs of code straight from ChatGPT into a git commit?
There's a weird gulf of incomprehension between people that use AI to help them code and those that don't. I'm sure you're as confused by this exchange as I am.
Working in a codebase with 10s of other developers seems... pretty normal? Not universal sure, but that has to be a decent percent of professional software work. Once you get to even a half dozen people working in a code base I think consistency and clarity take on a significant role.
In my own experience I've worked on repos with <10 other devs where I spent far more effort on consistency and maintainability than on getting the thing to work.
I'm not sure where I said that but I certainly didn't intend to give that impression.
I use AI either as an unblocker to get me started, or to write a handful of lines that are too complex to do from memory but not so complex that I can't immediately grok them.
I find both types of usage very satisfying and helpful.
It does generate swaths of code; however, you have to review and test it. But, depending on what you are working in/with, you would have to write this yourself anyway; for instance, Go always has so much plumbing, and AI simply removes all those keystrokes. And very rigorously: it adds all the err and defer blocks, which can be hundreds of lines and a large percentage of a Go file. What is the point of writing that yourself? It does it very fast as well; if you write the main logic without any of that stuff and ask Sonnet to make it 'good code', you write a few lines and get hundreds back.
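To illustrate the kind of plumbing I mean, here is a minimal, made-up sketch (copyConfig and the file names are hypothetical, not from any real codebase): the actual logic is one io.Copy call, and everything around it is the err/defer boilerplate an assistant will happily type for you.

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyConfig copies srcPath to dstPath. The interesting part is one line;
    // the rest is the error-check and cleanup plumbing discussed above.
    func copyConfig(srcPath, dstPath string) error {
        src, err := os.Open(srcPath)
        if err != nil {
            return fmt.Errorf("open %s: %w", srcPath, err)
        }
        defer src.Close()

        dst, err := os.Create(dstPath)
        if err != nil {
            return fmt.Errorf("create %s: %w", dstPath, err)
        }
        defer dst.Close()

        if _, err := io.Copy(dst, src); err != nil {
            return fmt.Errorf("copy %s to %s: %w", srcPath, dstPath, err)
        }
        return nil
    }

    func main() {
        if err := copyConfig("app.conf", "app.conf.bak"); err != nil {
            fmt.Println("error:", err)
        }
    }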
But it is far more useful on verbose 'team-written' corporate stuff than on more reuse-intensive tech: in CL or Haskell, the community is far more DRY than in Go or JS/TS; you tend to create and reuse many things, and much of the end result is (basically) a DSL. Current AI is not very good at that in my experience; it will recreate or hallucinate functions all over the place (even when you push it to reuse previously created things, if there are too many, and even though they fit in the context window). But many people have the same issue: they don't know, cannot search, or forget, and will just redo things many times over. AI makes that far easier (as in, often no work at all), so that's the new reality.
I'm not the same person but I share their perspective on this. I do it by treating AI written code the exact same way I treat mine. Extremely suspect and probably terrible on the first iteration, so I heavily test and iterate it until it's great code. If it's not up to my standards, I don't ever put it in a merge request, whether I handwrote it myself or had an AI write it for me.
> I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts.
It does not
You also may find this post from the other day more illuminating[0], as I believe the actual result strongly hints at what you're guessing. The study is high schoolers doing math. While GPT only has an 8% error rate for the final answer, it gets the steps wrong half the time. And with coding (like math), the steps are the important bits.
But I think people evaluate very poorly when metrics are ill-defined yet some metric exists. They overinflate its value because it's concrete. Completing a ticket doesn't mean you made progress. Introducing technical debt would mean taking a step back: a step forward in a very specific direction, but away from the actual end goal. You're just outsourcing work to a future person, and I think we like to pretend this doesn't exist because it's hard to measure.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
Is this a bad thing? Maybe I'm misunderstanding it, but even when I'm working on my own projects, I'm usually trying to solve a problem, and the technology is a means to an end to solving that problem (delivering). I care that it works, and is maintainable, I don't care that much about the technology.
No code solutions are often superior.
Around 15 years ago we were shipping terabyte hard disks as it was faster than the internet (until one got stuck in customs)
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
For them programming is a means to an end, and I think that is fine, in a way. But you cannot just ask an AI to write you a TikTok clone and expect to get the finished product. Writing software is an iterative process, and the LLMs currently used are not good enough for that, because they need not only to answer questions but, at the very minimum, to start asking them: "why do you want to do that?", "do you prefer this or that?", etc., so that they can actually extract all the specification details that the user happily didn't even know they needed before producing an appropriate output. (It's not too different from how some independent developers have to handle their clients, is it?) Probably we will get there, but not soon.
I also doubt that current tools can keep a project architecturally sound long-term, but that is just a hunch.
I admit though that I may be biased, because I don't much like tools like Copilot: when I write software, I have in my mind a model of the software that I am writing/want to write, while the AI has another model "in mind", and I need to spend mental energy understanding what it is "thinking". Even if 99 times out of 100 it is what I wanted, the remaining 1% is enough to hold me back from trusting it. Maybe I am using it the wrong way, who knows.
The AI tool that would work for me is a "voice-controlled, AI-powered pair programmer": I write my code, then from time to time I ask it questions about how to do something, and I get either a contextual answer depending on the code I am working on, or the actual code generated if I wish. Are there already plugins working that way for VS Code/IDEA/etc.?
I've been playing with Cursor (albeit with a very small toy codebase) and it does seem like it could do some of what you said - it has a number of features, not all of which necessarily generate code. You can ask questions about the code, about documentation, and other things, and it can optionally suggest code that you can either accept, accept parts of, or decline. It's more of a fork of vscode than a plugin right now though.
It is very nice in that it gives you a handy diffing tool before you accept, and it very much feels like it puts me in control.
> had to tackle after the less experienced devs have contributed their AI-driven efforts.
So, like before AI then? I haven't seen AI deliver illogical nonsense that I couldn't even decipher like I have seen some outsourcing companies deliver.
I have. If you're doing nicher stuff it doesn't have enough data and hallucinates. The worst is when it spits out two screens of code instead of 'this cannot be done at the level you want'.
> that I couldn't even decipher
That's unrelated to code quality, especially with C++, which has become as write-only as Perl.
But that is an HN bubble thing: I work and have worked with seniors with 10-15 years under their belt who have no logical bone in their body. The worst (and an AI does not do this) is when there is a somewhat 'busy' function with an if or switch statement and, over time, to add features or fix bugs, more ifs were added. Now, after 5+ years, this function is 15,000 lines and is somewhat of a trained-neural-network-adjacent thing: hundreds of nested ifs, pages long, that cannot be read and cannot be followed even if you have a brain. This is made by the senior staff of, usually, outsourcing companies, and I have seen it very many times over the past 40 years. Not entry level, not small companies either. I know a government tax system, partly maintained by a very large and well-known outsourcing company, which has numerous of these puppies in production that no one dares to touch.
AI doesn't do stuff like that because it cannot, which to me is a good thing. When it gets better, it might start to, I don't know.
People here live in a bubble where they think the world is full of people who read 'beautiful code', make tests, use git or something instead of zip$date and know how DeMorgan works; by far, most don't, not juniors, not seniors.
"Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. "
From my experience, most of it is quickly caught in code review. And after a while it occurs less and less, granted that the junior developer puts in the effort to learn why their PRs aren't getting approved.
So, pretty similar to how it was before. Except that motivated junior developers will improve incredibly fast. But that's also kind of always been the case in software development these past two decades?
Code quality is the hardest thing to measure. Seems like they were measuring commits, pull-requests, builds, and build success rate. This sort of gets at that, but is probably inadequate.
The few attempts I've made at using genAI to make large-scale changes to code have been failures, and left me in the dark about the changes that were made in ways that were not helpful. I needed suggestions to be in much smaller, paragraph-sized chunks. Right now I limit myself to using the genAI line-completion suggestions in PyCharm. It very often guesses my intentions and so is actually helpful, particularly when laboriously typing out lots of long literals, e.g. keys in a dictionary.
I don't remember who said it, but "AI generated code turns every developer into a legacy code maintainer". It's pithy and a bit of an exaggeration, but there's a grain of truth in there that resonates with me.
> Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
You get what you measure. Nobody measures software quality.
Maybe not at your workplaces, but at mine, we measured bugs, change failure rate, uptime, "critical functionality" uptime, regressions, performance, CSAT, etc. in addition to qualitative research on quality in-team and with customers
I don't think it's that clear-cut. I personally think the AI often delivers a better solution than the one I had in mind. It always contains a lot more safeguards against edge cases and other "boring" stuff that the AI has no problem adding but others find tedious.
If you're building a code base where AI is delivering on the details, it's generally a bad thing if the AI-provided code scatters safeguards WITHIN your code base.
Those kinds of safeguards should instead be part of the framework you're using. If you need to prevent SQL injection, you need to make sure that all access to the SQL database passes through a layer that prevents it. If you are worried about the security of your point of access (like an API facing the public), you need to apply safeguards as close to the point of entry as possible, and so on.
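A minimal sketch of that idea in Go (Store, User and UserByEmail are hypothetical names, just for illustration): all SQL goes through one data-access layer that only ever uses placeholders, so injection is prevented in a single place instead of being a per-call-site safeguard.

    package store

    import (
        "context"
        "database/sql"
    )

    // Store is the single layer all SQL access goes through. Callers never
    // build query strings themselves, so parameterization is enforced here
    // rather than re-implemented at every call site.
    type Store struct {
        db *sql.DB
    }

    type User struct {
        ID    int64
        Email string
    }

    // UserByEmail uses a placeholder, never string concatenation with user input.
    func (s *Store) UserByEmail(ctx context.Context, email string) (*User, error) {
        row := s.db.QueryRowContext(ctx,
            `SELECT id, email FROM users WHERE email = $1`, email)
        var u User
        if err := row.Scan(&u.ID, &u.Email); err != nil {
            return nil, err
        }
        return &u, nil
    }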
I'm a big believer in AI generated code (over a long horizon), but I'm not sure the edge case robustness is the main selling point.
Sounds like we're talking about different kinds of safeguards. I mean stuff like a character in a game ending up in a broken state due to something that is theoretically possible but very unlikely, or where the consequence is not worth the effort. An AI has no problem taking those into account and writing tedious safeguards, while I'd skip them.
This. Even without AI, we have inexperienced developers rolling out something that "just works" without thinking about many of the scaling/availability issues. Then you have to spend 10x the time fixing those issues.
>Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.
You are not a fucking priest in the temple of engineering; go to the fucking CS department at the local uni and preach it there.
You are a worker at a company with customers, which pays you a salary from customers' money.
> I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.
If I don't deliver my startup burns in a year.
In my previous role if I didn't deliver the people who were my reports did not get their bonuses.
The incentives are very clear, and have always been clear - deliver.
Successful companies aren't built in a sprint. I doubt there has ever been a successful startup that didn't have at least some competent people thinking a number of steps ahead. Piling up tech debt to hit some short-term, arbitrary goals is not a good plan for real success.
Here's a real example of delivering something now without worrying about being the best engineer I can be. I have 2 CSRs. They are swamped with work, and we're weeks away from bringing another CSR on board. I find a couple of time-consuming tasks that are easy to automate and build those out separately as one-off jobs that work well enough. Instantly it's a solid time gain and stress reducer for the CSRs.
Are my one-off automation tasks a long term solution? No. Do I care? Not at the moment, and my ego can take a hit for the time being.
I hear this, but I don't think this is a new issue that AI brought; it simply magnified it. It's a company culture issue.
It reminds me of a talk Raymond Hettinger gave a while ago about rearranging the flowers in the garden. There is a tendency among new developers to rearrange for no good reason, and AI makes it even easier now. This comes down to a culture problem to me; AI is simply the tool, but the driver is the human (at least for now).
That's a problem of code organisation though. Large codebases should be split into multiple repos. At the end of the day code structure is not something to be decided only by compilation strategy, but by developer ergonomics as well. A massive repo is a massive burden on productivity.
WinUI 3 aims to be native, but of course Windows is a tapestry of many eras of Microsoft UI styles. If Microsoft doesn't again change their mind on how Windows should look, then WinUI 3 is indeed the look and behaviour that people will be expecting on a Windows machine.
HTMX veering into SPA territory will produce the same kind of code we got when jQuery did it. The apps would work but they would be wholly unmaintainable. HTMX evangelists will just say "See? It CAN do it.", completely disregarding the ergonomics as if they're irrelevant in software development.
HTMX beautifully captures the zone between static and SPA, but devs already seem to want to use the hammer for everything, because all SPAs are suddenly not true SPAs. Well it was similar with jQuery too. The apps were considered to not be complex enough for a different tech stack, but eventually many apps grew more complex in their requirements and there is no sane way to move from jQuery to Angular and those projects got left in the dust.
But the counter argument is also valid; if your app did not grow complex, then you paid an upfront price for that expected growth but didn't reap the rewards.
They're already on a better path with App Intents. Way more solid idea than having AI train how to click buttons on a UI that might change every now and then.
That's just a report on EV-fication of car companies. Looks like that's the biggest factor in whether a car company is considered to be taking action against climate change. I can assure you that Ford and GM couldn't give less of a damn about it and are only above Toyota because they want a piece of that sweet high EV price pie.
PWAs have nothing to do with web views or native apps. They're just a collection of technologies (service worker, caching API, platform APIs, home screen installation, etc.) that makes it possible to have offline-capable web apps with a shortcut on your desktop/home screen. There is no web view or native app involved, outside of your default browser.