
"But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come..."

"You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply."

These two ideas are closely related, really just different aspects of the same basic frailty of the human intellect. Understanding that can inform how you use these tools in work (or life) and where the lines need to be drawn for your own personal circumstances.

I can't say I disagree with anything you said and think you've made an insightful observation.


In the presence of sufficiently good and ubiquitous tools, knowing how to do some base thing loses most or all of its value.

In a world where everyone has a phone/calculator in their pocket, remembering how to do long division on paper is not worthwhile. If I ask you "what is 457829639 divided by 3454", it is not worth your time to do that by hand rather than plugging it into your phone's calculator.

In a world where AI can immediately produce any arbitrary 20-line glue script that you would have had to think about and remember bash array syntax for, there's no reason to remember bash array syntax.
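To make that concrete, here's the flavor of script I mean (a minimal sketch; the paths and file names are made up for illustration):

    #!/usr/bin/env bash
    # Collect non-empty log files into an array, then archive them.
    files=()                                # declare an empty array
    for f in /var/log/myapp/*.log; do       # hypothetical path
      [[ -s "$f" ]] && files+=("$f")        # append files that have content
    done
    echo "Archiving ${#files[@]} files"     # ${#files[@]} is the array length
    tar czf logs.tgz "${files[@]}"          # quoted [@] expands each element safely

Nothing hard, but exactly the kind of syntax that evaporates from memory.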

I don't think we're quite at that point yet but we're astonishingly close.


The value isn't in the rote calculation, but in the intuition that doing it gives you.

So yes, it's pretty useless for me to manually divide arbitrarily large numbers. But it's super useful for me to be able to reason around fractions and how that division plays out in practice.

Same goes for bash. Knowing the exact syntax is useless, but knowing what that glue script does and how it works is essential to understanding how your entire program works.

That's the piece I'm scared of. I've seen enough kids through tutoring that just plug numbers into their calculator arbitrarily. They don't have any clue when a number is off by a factor of 10 or what a reasonable calculation looks like. They don't really have a sense for when something is "too complicated" either, as the calculator does all of the work.


I totally agree.

The neat thing about AI-generated bash scripts would be that the AI can comment its code.

So the user can 1) check whether the comment for each step matches what they expect to be done, and 2) have a starting point to debug if something goes wrong.
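For example (a hypothetical generated script; the paths are invented), the output might look like:

    #!/usr/bin/env bash
    # Step 1: collect photos older than 30 days into an array
    mapfile -t old < <(find ./photos -name '*.jpg' -mtime +30)
    # Step 2: move them into the archive folder
    mv -- "${old[@]}" ./archive/

If the wrong files move, the step comments tell you exactly where to start looking.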


Go ahead and ask ChatGPT how that glue script works. You'll be incredibly satisfied with its detailed insights.

> If I ask you "what is 457829639 divided by 3454"

And if it spits out 15,395,143 I hope you remember enough math to know that doesn’t look right, and how to find the actual answer if you don’t trust your calculator’s answer.
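A quick order-of-magnitude check catches it: 457,829,639 is about 4.6×10^8 and 3,454 is about 3.5×10^3, so the quotient should be around 1.3×10^5 (it's roughly 132,550). An answer in the tens of millions is off by about two orders of magnitude.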


Sanity-checking expected output is one of the most vital skills a person can have. It really is. But knowing the general shape of the thing is different from knowing any particular algorithm, don't you think?

This gets to the root of the issue. The use case, the user experience, and thus the outcome are remarkably different depending on your current ability.

Using AI to learn things is useful, because it helps you get terminology right and helps you Google search well. For example, say you need to know a Windows API: you can describe it and get the name, then Google how it works.

As an experienced user you can get it to write code. You're good enough to spot errors in the code and basically just correct as you go. 90% right is good enough.

It's the in-between space which is hardest. You're an inexperienced dev looking to produce, not learn. But you lack the experience and knowledge to recognise the errors, or bad patterns, or whatever. Using AI you end up with stuff that's 'mostly right' - which in programming terms means broken.

This experience difference is why there's so much chatter about usefulness. To some groups it's very useful. To others it's a dangerous crutch.


This is both inspiring and terrifying at the same time.

That being said, I usually prefer to do something the long and manual way, sometimes writing the process down, and afterwards searching for easier ways to do it. Of course this makes sense on a case-by-case basis depending on your personal context.

Maybe stuff like crosswords and more will undergo a renaissance and we'll see more interesting developments like Gauguin[0] which is a blend of Sudoku and math.

[0] https://f-droid.org/en/packages/org.piepmeyer.gauguin/


Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.

The difference is that you can trust a good calculator. You currently can't trust AI to be right. If we get a point where the output of AI is trustworthy, that's a whole different kind of world altogether.


>The difference is that you can trust a good calculator.

I found a bug in the iOS calculator in the middle of a master's degree exam. The answer changed depending on which way the phone was held. (A real bug - I reported it and they fixed it.) So knowing the expected result matters even when using a calculator.


For replacement like I described, sure. But it will be very useful long before that.

AI that writes a bash script doesn't need to be better than an experienced engineer. It doesn't even need to be better than a junior engineer.

It just needs to be better than Stack Overflow.

That bar is really not far away.


You're changing the goalposts. Your original post was saying that you don't need to know fundamentals.

It was not about whether AI is useful or not.


I'm not changing goalposts, I was responding to what you said about AI spitting out something wrong and you spending 3 hours debugging it.

My original point about not needing fundamentals would obviously require AI to, y'know, not hallucinate errors that take three hours to debug. We're clearly not there yet. The original goalposts remain the same.

Since human conversations often flow from one topic to another, in addition to the goalpost of "not needing fundamentals" in my original post, my second post introduced a goalpost of "being broadly useful". You're correct that it's not the same goalpost as in my first comment, which is not unexpected, as the comment in question is also not my first comment.


Hopefully that happens rarely enough that when it does, we can call upon highly paid human experts who still remember the art of doing long division.

> Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.

This is basically how AI research is conducted. It's alchemy.


>>The difference is that you can trust a good calculator. You currently can't trust AI to be right.

Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.

Ask for the smallest possible for loop or if statement that AI can generate, and now you have the pocket calculator equivalent of programming.
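Something on the order of this (a trivial sketch):

    for i in 1 2 3; do
      if (( i % 2 )); then    # arithmetic test: true for odd numbers
        echo "$i is odd"
      fi
    done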


>> Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.

Is it? What is 5/2+3?


There is only one correct way to calculate 5/2+3. The order is PEMDAS[0]. You divide before adding. Maybe you are thinking that 5/(2+3) is the same as 5/2+3, which is not the case. Improper math syntax doesn’t mean there are two potential answers, but rather that the person that wrote it did so improperly.
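Concretely: 5/2+3 = 2.5+3 = 5.5, whereas 5/(2+3) = 5/5 = 1.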

[0] https://www.mathsisfun.com/operation-order-pemdas.html


Maybe the user means the difference between a simple calculator that evaluates everything as you type it in and one that can figure out the correct order of operations. We used those simpler ones in school when I was young; the new fancy ones were quite something after that :) On a strictly left-to-right calculator, 2+3×4 comes out as 20, while a precedence-aware one gives 14.

So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.

“Which is a question that can be interpreted in only one way. And done only one way.”

The question for calculators is then the same as the question for LLMs: can you trust the calculator? How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?


> So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.

No. There being "more than one way" to interpret implies the meaning is ambiguous. It's not.

There's not one incorrect way to interpret that math statement, there are infinitely many incorrect ways to do so. For example, you could interpret it as a poem about cats.


>>How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?

This is just splitting hairs. People who use calculators interpret it in only one way. You are making a different and broader argument, that words/symbols can have various meanings and hence anything can be interpreted in many ways.

While these are fun arguments to make, they are not relevant to the practical use of the calculator or LLMs.


I don't honestly think anyone can remember bash array syntax if they take a 2 week break. It's the kind of arcane nonsense that LLMs are perfect for. The only downside is that if the fancy autocomplete model messes it up, we're gonna be in bad shape when Steve retires, because half the internet will be an ouroboros of AI-generated garbage.
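Case in point, the bits that evaporate after any break (a quick refresher sketch):

    arr=(one two three)
    echo "${arr[0]}"      # first element
    echo "${arr[@]}"      # all elements
    echo "${#arr[@]}"     # length: 3
    echo "${!arr[@]}"     # the indices: 0 1 2
    arr+=(four)           # append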

>>I wonder if my coding skill will deteriorate in the years to come...

Well, that's not how LLMs work. Don't use an LLM to do the thinking for you. You use LLMs to work for you, while you tell it (after thinking) what's to be done.

Basically, things like:

- Attach a click handler to this button with x, y, z params and on click route it to the path /a/b/c

- Change the color of this header to purple.

- Parse the JSON in param 'payload', pick up the value under this>then>that, and return it (sketched below).

That kind of dictation. You don't ask big questions like 'Write me a todo app' or 'Write me this dashboard'. Those are too broad.

You will still continue to code and work like you always have. Except that you now have a good coding assistant that will do the chore of typing for you.
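As a sketch of what the JSON dictation above might hand back (assuming jq is available; the payload shape here is made up):

    #!/usr/bin/env bash
    payload='{"this": {"then": {"that": 42}}}'   # illustrative input
    value=$(jq -r '.this.then.that' <<< "$payload")
    echo "$value"                                # prints 42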


Maybe I'm too good with my editor (currently Emacs, previously Vim), but the fact is that I can type all of this faster than I can dictate it to an AI and verify its output.

Yes, editor proficiency is something that beats these things any day.

In fact, if you are familiar with keyboard macros, you can do a lot of heavy-lifting text tasks in both vim and emacs.

I don't see these as opposing traits. One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?


> One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?

I mostly use manuals, books, and the occasional forum search. The advantage is that you pick up surrounding knowledge, and the writing is more consistent. And today, I know where some of the good stuff is. You're not supposed to learn everything in one go. I built a knowledge map where I can find what I want in a more straightforward manner. No need to enter into a symbiosis with an LLM.


Well, it's entirely an individual choice to make. But I don't generally view the world in terms of ORs; I view it in terms of ANDs.

One can pick and use multiple good things at a time. Using vim doesn't mean I won't use VS Code, or vice versa. Or that if you use VS Code you must not use AI with it.

Having access to a library doesn't mean one must not use Google. One can use both, or many, at one time.

There are no rules here, the idea is to build something.


I asked o1 to make an entire save system for a game/app I’m working on in Unity with some pretty big gotchas (Minecraft-like chunk system, etc) and it got pretty close to nailing it first try - and what it didn’t get was due to me not writing out some specifics.

I honestly don’t think we’re far out from people being able to write “Write me a todo app” and then telling it what changes to make after.

I recently switched back to software development from professional photography and I’m not sure if that’s a mistake or not.


I think that anybody who finds the process of clumsily describing the above examples to an LLM in some text box using English, and waiting for it to spit out some code which you hope is suitable for your given programming context and codebase, more efficient than just expressing the logic directly in your programming language in an efficient editor, probably suffers from multiple weaknesses:

- Poor editor / editing setup

- Poor programming language and knowledge thereof

- Poor APIs and/or knowledge thereof

Mankind has worked for decades to develop elegant and succinct programming languages within which to express problems and solutions, and compilers with deterministic behaviour to "do the work for us".

I am surprised that so many people in the software engineering field are prepared to just throw all of this away (never mind develop it further) in exchange for using a poor "programming language" (say, English) to express problems clumsily in a roundabout way, and then throw away the "source code" (the LLM prompt) entirely, only to paste the "compiler output" (code the LLM spewed out, which may or may not be suitable or correct) into some heterogeneous mess of multiple different LLM outputs pasted together in a codebase held together by nothing more than the law of averages, and hope.

Then there's the fun fact that every single LLM prompt interaction consumes a ridiculous amount of energy - I heard figures such as the total amount required to recharge a smartphone battery - in an era where mankind is racing towards an energy cliff. Vast, remote data centres filled with GPUs spewing tonnes of CO₂ and massive amounts of heat to power your "programming experience".

In my opinion, LLMs are a momentous achievement with some very interesting use-cases, but they are just about the most ass-backwards and illogical way of advancing the field of programming possible.


There's a new mode of programming (with AI) that doesn't require English and also results in massive efficiency gains. I now only need to begin a change and the AI can normally pick up on the pattern and do the rest, via subsequent "tab" key hits, as I audit each change in real time. It's like I'm expressing the change I want via a code example to a capable intern who quickly picks up on it and can type at 100x my speed, but not faster than I read.

I'm using Cursor btw. It's almost a different form factor compared to something like GH copilot.

I think it's also worth noting that I'm using TypeScript with a functional programming style. The state of the program is immutable and encoded via strongly typed inputs and outputs. I spend (mental) effort reifying use-cases via enums or string literals, enabling a comprehensive switch over all possible branches as opposed to something like imperative if statements. All this to say, that a lot of the code I write in this type of style can be thought of as a kind of boilerplate. The hard part is deciding what to do; effecting the change through the codebase is more easily ascertained from a small start.


Provided that we ignore the ridiculous waste of energy entailed by calling an online LLM every time you type a word in your editor, I agree that the utility of LLM-assisted programming as "autocomplete on steroids" can be very useful. It's awfully close to that of a good editor using the type system of a good programming language to provide suggestions.

I too love functional programming, and I'm talking about Haskell-levels of programming efficiency and expressiveness here, BTW.

This is quite a different use case than those presented by the post I was replying to though.

The Go programming language has this mantra of "a little bit of copy and paste is better than a little bit of dependency on other code". I find that LLM-derived source code takes this mantra to an absurd extreme, and furthermore that it encourages a thought pattern that never leads you to discover, specify, and use adequate abstractions in your code. All higher-level meaning and context is lost in the end product (your committed source code) unless you already think like a programmer _not_ being guided by an LLM ;-)

We do digress though - the original topic is that of LLM-assisted writing, not coding. But much of the same argument probably applies.


When you take energy into account, it's like anti-engineering: what if we used a mountain of effort to achieve a worse result?

At the time I'm writing this, there are over 260 comments to this article and yours is still the only one that mentions the enormous energy consumption.

I wonder whether this is because people don't know about it or because they simply don't care...

But I, for one, try to use AI as sparingly as possible for this reason.


You're not alone. With the inclusion of Gemini-generated answers in Google search, it's going down the road of most capitalistic things: you see that something is wrong, but you have no option but to use it even if you don't want to.


