For me, on small personal projects, I can get a project in about 4 hours to a point that, before the new AI tools, would've taken about 40. At work, there is a huge difference due to the complexity of the codebase and services. Using agents to code for me in these cases has 100% been a loop of iterating on something so often that I would've been better off with a more hands-on approach than essentially just reviewing PRs written by AI.
Yeah, I don't understand this comparison. I've programmed professionally for years in higher-level languages and never learned assembly, and never got stuck because the higher-level language was limited or doing something wrong.
Whenever I use an LLM, I always need to review its output because there is usually something not quite right. For context, I'm using VS Copilot, mostly in ask and agent mode, in a large brownfield project.
People keep comparing higher-level programming languages to lower-level abstractions - these comparisons are absolutely false. The whole point of higher-level programming languages is for people to get away from working with the lower-level stuff.
But with the way software engineers are interacting with LLMs, they are not getting away from writing code because they have to use what comes out of it to achieve their goal (writing and piecing together code to complete a project).
My career sat at the interface of hardware and software. We would often run into situations where the code produced by the compiler was not what we desired. This issue was particularly pronounced when we were transitioning some components from being hand-written in assembly to being generated by a compiler.
I think the parallels are clear for those of us who have been through this scenario.
In reality, the outcome doesn't appear to be the result of "pure chaos and randomness" if you ground your tools. Test cases and instructions do a fantastic job of keeping them focused and moving down the right path.
If I see an LLM consistently producing something I don't like, I'll either add the correct behavior to the prompt, or create a tool that will tell it whether it messed up, and prompt it to call that tool after each major change.
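To make that concrete, here is a minimal sketch of such a "did I mess up?" check tool, assuming a Node/TypeScript project where "npm test" and "tsc" are the relevant checks; the file name check.ts and the exact commands are illustrative assumptions, not part of the workflow described above.

    // check.ts - a hypothetical check tool the agent is prompted to run
    // after each major change. The commands below are assumptions; swap in
    // whatever tests/linters the project actually uses.
    import { execSync } from "node:child_process";

    function run(label: string, command: string): boolean {
      try {
        execSync(command, { stdio: "pipe" });
        console.log(`PASS ${label}`);
        return true;
      } catch (err) {
        // Surface the failing output so the agent can read what went wrong.
        console.log(`FAIL ${label}`);
        if (err instanceof Error && "stdout" in err) {
          console.log(String((err as unknown as { stdout: Buffer }).stdout));
        }
        return false;
      }
    }

    const ok = [
      run("unit tests", "npm test --silent"),
      run("type check", "npx tsc --noEmit"),
    ].every(Boolean);

    process.exit(ok ? 0 : 1);

The exit code gives the agent a simple pass/fail signal, and the captured output gives it something concrete to fix instead of guessing.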
In the previous scenario, programmers were still writing the code themselves. The compilers, if they were any good, generated deterministic code.
In our current scenario, programmers are merely describing what they think the code should do, and another program takes their description and then stochastically generates code based on it.
Compilers are (a) typically non-deterministic and (b) produce different code from one version to the next, from one brand to the next, and from one set of flags to the next.
To some degree you're correct -- LLMs can be viewed as the kind of "sufficiently advanced" compiler we've always dreamed of. Our dreams didn't include the compiler lying to us though, so we have not achieved utopia.
LLMs are more like DNA transcription, where some percentage of the time it just injects a random mutation into the transcript, either causing an evolutionary advantage or a terminal disease.
This whole AI industry right now is trying to figure out how to always get the good mutation, and I don't think it's something that can be controlled that way. It will probably turn out that on a long enough timescale, left unattended, LLMs are guaranteed to give your codebase cancer.
It's not. And people are realizing that, which is causing them to bring aspects of software engineering back into AI coding to make it more tolerable. People once questioned whether AI would replace all programming languages with natural-language interfaces; it now looks like programming languages will be reinvented in the context of AI to make their natural-language interface more tool-like.
It's a change in mindset. AI is like having your own junior developer. If you've never had direct reports before, where you have to give them detailed instructions and validate their code, then you're right: it might end up more exhausting than just doing the work yourself.
My brother legit invested some $60 in a company that ChatGPT recommended, and only then did he see that it made sense.
The day he bought, everything went downhill at that particular company, lol. But to be fair, he said he just had this as chump change and basically wanted to invest but didn't know what in. (I have repeatedly told my brother that investment funds are cool, and he has started to agree, I think.)
Also don't forget all the people, at least in the crypto-alt space, posting screenshots of Grok/ChatGPT (since those are the only two most of them know, lol) saying that their X crypto is underrated, or that it can grow its market cap to Y% of the total market, or that it has the potential to grow Z times, or that it is the Nth most favourite crypto, or whatever.
Trust me, it's already happening, man, but I think it's happening in chump change.
The day it starts happening with thousands of dollars' worth of investment is the day things will be really, really wrong.
I don't know if "AI" will be able to do 100%, or even 90%, of my job in the next year(s). But I do know what I can see now: "AI" is doing more harm than good.
Billions of dollars literally burned on weird acquisitions and power, huge power consumption, and, maybe worst of all: the enshittification.
Is this really what we want? Or is it what investors want?
What's so strange about that as a concept? As someone who grew up in the 90s, the idea of doing so is totally normal for me. I mean, I don't do it, but I wouldn't blink at someone else saying they did.
How do you think people achieve the goal of 50 books in a year, for example?
There's also chores, talking with people, board games, going for a walk, a bath, sex, exercise, just doing nothing in particular for a bit, etc. The choice isn't only screens or books.
Why not? You can make reading a habit if you want to. I find it highly rewarding and glad I have this habit instead of something else like scrolling social media.
Not sure if you are being facetious. I'll put my hand up as someone who reads at least an hour at night before going to sleep. Like anything it becomes normal eventually.
As a matter of fact I'll put my phone away and get my book out.
I have seen those comments, but I do wonder to what extent that is because the comments' authors actually intended such positions, versus subtlety and nuance being hard to write and easy to overlook when reading. (Ironically, humans are more boolean than LLMs; the word "nuance" itself seems a bit like ChatGPT's voice.)
I'm sure people place me closer to #1 than I actually feel, simply because I'm more often responding to people who seem to be too far in the #2 direction than vice versa.
Your comment seems pretty accurate because, from my perspective, I've never seen comments of type #1. And so, despite me explicitly saying otherwise, people like the GP commenter may be reading my comments as #1.
The first ('''So this "one weird trick" will disappear without any special measures''' etc.) does not seem so to me; I do not read that as a claim of perfection, merely a projection of the trends already seen.
On the second ('''If the computer can see it we have a discriminator than we can use in a GAN-like fashion to train the network not to make that mistake again.''') I agree with you: that's overstating what GANs can do. They're good, but they're not that good.
The third ('''Once you highlight any inconsistency in AI-generated content, IMHO, it will take a nothingth of a second to "fix" that.''') I'd lean towards agreeing with you, that seems to understate the challenges involved.
The fourth ('''Well, nice find, but now all the fakes have to do is add a new layer of AI that knows how to fix the eyes.''') is technically correct, but, contrary to the meme, this is not the best kind of correct, and again it downplays the challenge just as the previous one does (though it is unclear to me whether that is because nuance is hard to write and to read, or the genuine position). Also, once you're primed to look for people who underestimate the difficulties, I can easily see why you would read it as such an example, since it's close enough to be ambiguous.
You could just... try it. It's very impressive what it can do. It's not some catch-all solution to everything but it saves me hours of time every week. Some of the things it can do are really quite amazing; my real-life example:
I took a picture of my son's grade 9 math homework worksheet and asked ChatGPT to tell me which questions he got wrong. It did that perfectly.
But I use it for the more mundane stuff, like "From this long class definition, can you create a list of assignments for each property that look like this: object1.propertyName = object2.propertyName" and poof.
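As a rough illustration of the kind of boilerplate being asked for (the class and property names below are made up for the example, not from any real codebase):

    // A hypothetical class definition, standing in for the "long class
    // definition" pasted into the prompt.
    class UserSettings {
      displayName = "";
      emailOptIn = false;
      theme = "light";
    }

    // The kind of output requested: one assignment per property, in the
    // object1.propertyName = object2.propertyName shape.
    function copySettings(object1: UserSettings, object2: UserSettings): void {
      object1.displayName = object2.displayName;
      object1.emailOptIn = object2.emailOptIn;
      object1.theme = object2.theme;
    }

Tedious to type by hand once there are dozens of properties, but trivial for the model to expand.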
I think it's because at this point there is nothing else interesting to say. We've all seen AI-generated images that look impressively real. We've also all seen artifacts proving they aren't perfect. None of this is really new anymore.
Plenty of people think AI is useful (and equally dangerous). Only useful, though, not redefines-everything. “I use AI as an assistant” is a common sentiment.
> 1 -> AI is awesome and perfect, if it isn't, another AI will make it perfect
> 2 -> AI is just garbage and will always be garbage
3 -> An awesome AI will actually predictably be a deep negative for nearly all people (for much more mundane reasons than the Terminator-genocide-cliche), so the progress is to be dreaded and the garbage-ness hoped for.
Your 1 is warmed-over techno-optimism, which is far past its sell-by date but foundational to the tech-entrepreneurship space. Your 2 greatly underestimates what tech people can deliver.
Is there a way to persist the file even after updates?