
> AI assistance means that programmers can concern themselves less and less with the particulars of any language.

Sure. Until we need to. Then we face some apparently tiny concern which is actually deeply intricated with the rest of this whole mess, and we are in for a ride down the rabbit hole.

> most developers today don’t pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail.

This may be very misguided on my part, but I have the feeling these are two very different cases. OK, not everyone is an ffmpeg-level champion who will thrive in code-golfing ASM down to the last cycle.

But there are probably also reasons why third-generation programming languages have lasted without any subsequent proposal completely displacing them. It's all about a tradeoff between expressiveness and precision: what we want to keep in the focus zone, and what we are willing to delegate to mostly uncontrolled details.

If, to go faster, we need to give up our transparent glasses, we will need very sound and solid alternative probes to report what's going on ahead.





Take into account that this is published by the IEEE.

In my opinion, their target audience is scientists rather than programmers, and a scientist most often thinks of code as a tool to express their ideas (hence, perfect AI-generated code is a kind of grail). The faster they can express them, even if the code is ugly, the better. Most of the time they don't care about reusing the code later.

Another hint that scientists, not programmers, are the target audience: some things would only register with one group and not the other. For example, they consider Arduino a language. That makes total sense for scientists, since most of the ones using Arduino don't necessarily know C++, but are proud to be able to code in Arduino.


That’s a good point.

For a professional programmer, code and what it does is the object of study. Saying the programmer shouldn’t look at the code is very odd.


But reproducibility is famously a matter of some concern to scientists.

Sure, but their tools are complexity-management tools: hypotheses, experiments, empirical evidence, probabilities. To my knowledge, they deal far less with the determinism programmers rely on. It's reproducible if you get similar results with the same probability.
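
To put that concretely, here's a tiny hypothetical sketch of that notion of reproducibility (the values and tolerance are made up, nothing from real research code):

    #include <math.h>
    #include <stdio.h>

    /* "Reproduced" here means "agrees within a relative tolerance",
       not "bit-for-bit identical". */
    int reproduced(double reference, double rerun, double rel_tol) {
        double scale = fmax(fabs(reference), fabs(rerun));
        return fabs(reference - rerun) <= rel_tol * scale;
    }

    int main(void) {
        printf("%d\n", reproduced(0.8314159, 0.8314162, 1e-5)); /* 1: close enough */
        printf("%d\n", reproduced(0.8314159, 0.9102234, 1e-5)); /* 0: not reproduced */
        return 0;
    }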

If code is actually viewed as a tool to express ideas, making it easy to read and figure out should be a goal.

I like programming, I like clean code, so it's something I struggled with when I began research.

But in practice, writing code when you don't have specifications, because you don't yet know whether the idea will work and you keep discovering problems with it as you go, doesn't naturally lead to readable code.

You refactor all the time, but then something you misunderstood becomes a concern, and you need to refactor everything again, and again and again. You lose a lot of time, and research is fast-paced.

Scientists who spend too much time cleaning up code often miss the deadlines and deliverables that are actually what they need to produce. Nobody cares about their code: once the idea is fully developed, other scientists will just rewrite better software with a full view of the problem. (Some scientists rewrite their whole software once everything is figured out.)

I think a sensible goal for scientists would be easy-to-write code rather than easy-to-read code.


But if you are iterating on code and using an LLM without even looking at the code, there's a reasonable chance that when you prompt "okay, now handle factor y also", you end up with code that handles factor y but also handles pre-existing factor x differently for no good reason. And scientific work is probably more likely than average programming to be numerics stuff where seemingly innocuous changes to how things are computed can have significant impacts due to floats being generally unfriendly.
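
To make the float point concrete, a minimal hypothetical sketch (not taken from any real research code):

    #include <stdio.h>

    int main(void) {
        /* The same two numbers, combined in a different order. */
        float big = 1e8f, small = 1.0f;
        float a = (big + small) - big;  /* small is absorbed into big: prints 0 */
        float b = (big - big) + small;  /* small survives: prints 1 */
        printf("a = %f, b = %f\n", a, b);
        return 0;
    }

An LLM that "just reorganizes" a computation while adding factor y can move you from one of these to the other without anyone noticing.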

Totally agree, in my experience we are far from having reliable research code based on prompts.

We are clearly not there yet, but I feel the article is pushing in that direction, maybe to steer research that way.

A long time ago there was an article from the creators of Mathematica or Maple, I don't remember which, that said something similar. The question was: why do we learn to carry out matrix operations at school when (modern) tools can perform all of them? We should teach matrix algebra at school and let students use the software (a little like using calculators). This would let children learn more abstract thinking and test far more interesting ideas. (If someone has the reference, I'm interested.)

I feel the article follows the same lines, but with current tools.

(Of course, I'm skipping over the fact that Mathematica is deterministic in doing algebra, and LLMs are far from it.)


>> most developers today don’t pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail.

If it were even slightly true, then we wouldn't be generating language syntax at all; we'd be generating raw machine code for the chip architectures we want to support. Or even just distributing the prompts and letting an AI VM generate the target machine code later.

That may well happen one day, but we’re not even close right now


Also, there's so much patching in the kernel (for Unix) to work around hardware bugs. And a lot of languages depend on C (with all its footguns) to provide that stable foundation. It's all unseen work that is very important.

> This may be very misguided on my part, but I have the feeling these are two very different cases

They are indeed very different. If your compiler doesn't emit the right output for your architecture, or the highly optimized library you imported breaks on your hardware, you file a bug and, depending on the third party, get help fixing the issue. Additionally, those types of issues are rare in popular libraries and languages unless you're pushing boundaries, which likely means you are knowledgeable enough to handle those kinds of edge cases anyway.

If your AI gives you the wrong answer to a question, or outputs incorrect code, it's entirely on you to figure it out. You can't reach out to OpenAI or Anthropic to help you fix the issue.

The former allows you to pretty safely remain ignorant. The latter does not.


Oh dear. Using AI for something you don't understand well is surely a recipe for disaster and should not be encouraged.

My take is that you should be using AI for exactly the same things you would ask a random contractor to do for you, knowing that they won't be there to maintain it later.

On the other hand, one can see it as another layer of abstraction. Most programmers are not aware of how the assembly code generated from their programming language actually plays out, so they rely on the high-level language as an abstraction of machine code.

Now we have an additional layer of abstraction, where we can instruct an LLM in natural language to write the high-level code for us.

natural language -> high level programming language -> assembly

I'm not arguing whether this is good or bad, but I can see the bigger picture here.


Assembly is generally generated deterministically. LLM code is not.

Different compiler versions, target architectures, or optimization levels can generate substantially different assembly from the same high-level program. Determinism is thus very scoped, not absolute.
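
A quick way to see this for yourself, assuming a gcc/clang-style toolchain (the file name and flags below are just for illustration):

    /* loop.c */
    int sum_to_n(int n) {
        int s = 0;
        for (int i = 1; i <= n; i++)
            s += i;
        return s;
    }

    /* Compile twice and diff the assembly:
         cc -S -O0 loop.c -o loop_O0.s
         cc -S -O2 loop.c -o loop_O2.s
       At -O0 the loop is typically emitted as written; at -O2 many
       compilers rewrite it (often to a closed-form expression), so the
       instructions differ substantially even though the source and the
       observable behaviour do not. */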

Also, almost every piece of software has known unknowns in the form of dependencies that get continually updated. No one can read all of that code. Hence, in real life, if you compile on different systems ("works on my machine"), or again after some time has passed (updates to the compiler, OS libs, packages), you will get a different checksum for your build even though the high-level code you wrote is unchanged. So in theory, given perfect conditions, you are right, but in practice that's not the case.

There are established benchmarks for code generation (such as HumanEval, MBPP, and CodeXGLUE). On these, given the same prompt, the vast majority of an LLM's completions are consistent and pass the unit tests; for many tasks, the same prompt will produce a passing solution over 99% of the time.

I would say yes, there is a gap in determinism, but it's not as big as one might think, and it's closing as time goes on.


Your comment lacks so much context and nuance that it's ultimately nonsense.

You absolutely can, and probably _should_, leverage AI to learn many things you don't understand at all.

Simple example: try picking up a programming language like C with and without LLMs. With one, you're going to be much more efficient. C is one of the languages LLMs have seen the most; they are very, very good at it for learning purposes (and at bug hunting).

I have never learned as much about computing as in the last 7-8 months of using LLMs to assist me with summarizing, getting information, finding bugs, explaining concepts iteratively (99% of software books are crap: poorly written, quickly outdated, often wrong), scanning git repositories for implementation details, etc.

You people keep making the same mistake over and over: there are a million uses for LLMs, and instead of defining the context of what you're discussing, you conflate everything with vibe coding, which ultimately makes your comments nonsense.


I've posted this before, but I think it will be a perennial comment and concern:

Excerpted from Tony Hoare's 1980 Turing Award speech, 'The Emperor's Old Clothes'... "At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted--he always shouted-- "You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution."

My interpretation is that whether shifting from delegation to programmers, or to compilers, or to LLMs, the invariant is that we will always have to understand the consequences of our choices, or suffer the consequences.

Applied to your specific example: yes, LLMs can be good assistants for learning. I would add that triangulation against other sources and against empirical evidence is always necessary before one can trust that learning.


My context is that I have seen some colleagues try to make up for not having expertise with a particular technology by using LLMs and ultimately they have managed to waste their time and other people's time.

If you want to use LLMs for learning, that's altogether a different proposition.


I kinda knew what you meant, but I also feel it is important to provide the nuance and context.

Seems like a significant skill/intelligence issue. Someone I know built a web security/pentesting company without ANY prior knowledge of programming or security in general.

And his shit actually works, by the way: topping leaderboards on HackerOne and having a decent number of clients.

your colleagues might be retarded or just don’t know how to use llms


Would you recognize a memory corruption bug when the LLM cheerfully reports that everything is perfect?

Would you understand why some code is less performant than it could be if you've never written and learned any C yourself? How would you know if the LLM output is gibberish/wrong?
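
For instance, a deliberately simplified, hypothetical example of the kind of bug in question:

    #include <stdio.h>
    #include <string.h>

    /* Looks fine at a glance, and "works" in a quick test. */
    void greet(const char *name) {
        char buf[8];
        strcpy(buf, name);   /* no bounds check: a name longer than 7 chars
                                overflows buf and corrupts the stack */
        printf("hello, %s\n", buf);
    }

    int main(void) {
        greet("Bob");                     /* fine */
        greet("a-much-longer-username");  /* undefined behaviour */
        return 0;
    }

If you've never been bitten by this yourself, a cheerful "looks good!" from the model is easy to take at face value.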

They're not wrong; it's just not black-and-white. LLMs sometimes happen to generate what you want. Oftentimes, for experienced programmers who can recognize good C code, the LLMs generate too much garbage for the tokens they cost.

I think some people are also arguing that some programmers ought to still be trained in and experienced with the fundamentals of computing. We shouldn't abandon that skill set completely. Someone will still need to know how the technology works.


Not sure how your comment relates to mine.

The parent I answered said you shouldn't use LLMs for things you don't understand while I advocate you should use them to help you learn.

You seem to describe very different use cases.

In any case, just to answer your (unrelated to mine) comment: here[1] you can see a video of one of the most skilled C developers on the planet finding very hard-to-spot bugs in the Redis codebase.

If all your arguments boil down to "lazy people are lazy and misuse LLMs", that's not a criticism of LLMs but of those people's lack of professionalism.

Humans are responsible for AI slop, not AI. Skilled developers are enhanced by such a great tool that they know how and when to use.

[1] https://www.youtube.com/watch?v=rCIZflYEpEk


I was commenting on relying completely on the LLM when learning a language like C without any prior understanding of C.

How do people using LLMs this way know that the generated code/text doesn’t contain errors or misrepresentations? How do they find out?


>The parent I answered said you shouldn't use LLMs for things you don't understand while I advocate you should use them to help you learn.

Someone else's interpretation is not what the author actually said. :)

Since the tone is so aggressive, it doesn't feel like it would be easy to build any constructive discussion on this ground.

Acting prudently is not blind rejection; the latter is no wiser than blind acceptance.


Would you mind sharing some of the ways that you leverage LLMs in your learning?

Some of mine:

* Converse with the LLM on deeper concepts

* use the `/explain` hook in VSCode for code snippets I'm struggling with

* Have it write blog-style series on a topic, replete with hyperlinks

I have gotten into some doom loops, though, when having it try to directly fix my code, often because I'm asking it to do something that isn't feasible, and its sycophantic tendencies tend to amplify this. I basically stopped using agentic tools to implement solutions that use tech I'm not already comfortable with.

I've used it for summarization as well, but I often find that a summary of a man page or RFC is insufficient for deeper learning. It's great for getting my feet wet and showing me gaps in my understanding, but I always end up having to read the spec in the end.


Good at bug hunting?

Have you heard about how much AI slop has been submitted to the curl project as "bugs" that always turn out not to be bugs?


All they've done is widen the plank over the abyss.

>> ...deeply intricated with...

I think you invented a new phrase. And it's a good one!


