
Lots of interesting debates in this thread. I think it is worth placing writing/coding tasks into two buckets. Are you producing? Or are you learning?

For example, I have zero qualms about relying on AI at work to write progress reports and code up some scripts. I know I can do it myself but why would I? I spent many years in college learning to read and write and code. AI makes me at least 2x more efficient at my job. It seems irrational not to use it. Like a farmer who tills his land by hand rather than relying on a tractor because it builds character or something. But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come...

On the other hand, if you are a student trying to learn something new, relying on AI requires walking a fine line. You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply. At the same time, if you under-rely on AI, you drastically decrease the rate at which you can learn new things.

In the old days, people were fit because of physical labor. Now people are fit because they go to the gym. I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?


"But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come..."

"You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply."

These two ideas are closely related and really just different aspects of the same basic frailty of the human intellect. Understanding that, I think, can really inform how you might use these tools in work (or life) and where the lines need to be drawn for your own personal circumstances.

I can't say I disagree with anything you said and think you've made an insightful observation.


In the presence of sufficiently good and ubiquitous tools, knowing how to do some base thing loses most or all of its value.

In a world where everyone has a phone/calculator in their pocket, remembering how to do long division on paper is not worthwhile. If I ask you "what is 457829639 divided by 3454", it is not worth your time to do that by hand rather than plugging it into your phone's calculator.

In a world where AI can immediately produce any arbitrary 20-line glue script that you would have had to think about and remember bash array syntax for, there's no reason to remember bash array syntax.

I don't think we're quite at that point yet but we're astonishingly close.


The value isn't in the rote calculation, but in the intuition that doing it gives you.

So yes, it's pretty useless for me to manually divide arbitrarily large numbers. But it's super useful for me to be able to reason around fractions and how that division plays out in practice.

Same goes for bash. Knowing the exact syntax is useless, but knowing what that glue script does and how it works is essential to understanding how your entire program works.

That's the piece I'm scared of. I've seen enough kids through tutoring that just plug numbers into their calculator arbitrarily. They don't have any clue when a number is off by a factor of 10 or what a reasonable calculation looks like. They don't really have a sense for when something is "too complicated" either, as the calculator does all of the work.


I totally agree.

The neat thing about AI-generated bash scripts would be that the AI can comment its code.

So the user can 1) check if the comment for each step matches what they expect to be done, and 2) have a starting point to debug if something goes wrong.


Go ahead and ask chat gpt how that glue script works. You'll be incredibly satisfied at its detailed insights.

> If I ask you "what is 457829639 divided by 3454"

And if it spits out 15,395,143, I hope you remember enough math to know that doesn't look right, and how to find the actual answer if you don't trust your calculator's answer. (457,829,639 is roughly 4.6x10^8 and 3,454 is roughly 3.5x10^3, so the quotient has to be on the order of 10^5 - around 132,550 - not 15 million.)


Sanity Checking Expected Output is one of the most vital skills a person can have. It really is. But knowing the general shape of the thing is different than any particular algorithm, don't you think?

This gets to the root of the issue. The use case, the user experience, and thus the outcome are remarkably different depending on your current ability.

Using AI to learn things is useful, because it helps you get terminology right and helps you Google well. For example, say you need to know a Windows API: you can describe it and get the name, then Google how that works.

As an experienced user you can get it to write code. You're good enough to spot errors in the code and basically just correct as you go. 90% right is good enough.

It's the in-between space which is hardest. You're an inexperienced dev looking to produce, not learn. But you lack the experience and knowledge to recognise the errors, or bad patterns, or whatever. Using AI you end up with stuff that's 'mostly right' - which in programming terms means broken.

This experience difference is why there's so much chatter about usefulness. To some groups it's very useful. To others it's a dangerous crutch.


This is both inspiring and terrifying at the same time.

That being said, I usually prefer to do something the long and manual way, sometimes write the process down, and afterwards search for easier ways to do it. Of course, this makes sense on a case-by-case basis depending on your personal context.

Maybe stuff like crosswords and more will undergo a renaissance and we'll see more interesting developments like Gauguin[0] which is a blend of Sudoku and math.

[0] https://f-droid.org/en/packages/org.piepmeyer.gauguin/


Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.

The difference is that you can trust a good calculator. You currently can't trust AI to be right. If we get a point where the output of AI is trustworthy, that's a whole different kind of world altogether.


>The difference is that you can trust a good calculator.

I found a bug in the iOS calculator in the middle of a master's degree exam. The answer changed depending on which way the phone was held. (A real bug - I reported it and they fixed it.) So knowing the expected result matters even when using the calculator.


For replacement like I described, sure. But it will be very useful long before that.

AI that writes a bash script doesn't need to be better than an experienced engineer. It doesn't even need to be better than a junior engineer.

It just needs to be better than Stack Overflow.

That bar is really not far away.


You’re changing the goal post. Your original post was saying that you don’t need to know fundamentals.

It was not about whether AI is useful or not.


I'm not changing goalposts, I was responding to what you said about AI spitting out something wrong and you spending 3 hours debugging it.

My original point about not needing fundamentals would obviously require AI to, y'know, not hallucinate errors that take three hours to debug. We're clearly not there yet. The original goalposts remain the same.

Since human conversations often flow from one topic to another, in addition to the goal post of "not needing fundamentals" in my original post, my second post introduced a goalpost of "being broadly useful". You're correct that it's not the same goalpost as in my first comment, which is not unexpected, as the comment in question is also not my first comment.


Hopefully that happens rarely enough that when it does, we can call upon highly-paid human experts who still remember the art of doing long division.

> Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.

This is basically how AI research is conducted. It's alchemy.


>>The difference is that you can trust a good calculator. You currently can't trust AI to be right.

Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.

Ask for the smallest possible for loop or if statement that AI can generate, and now you have the pocket-calculator equivalent of programming.


>> Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.

Is it? What is 5/2+3?


There is only one correct way to calculate 5/2+3. The order is PEMDAS[0]. You divide before adding, so 5/2+3 = 2.5+3 = 5.5, whereas 5/(2+3) = 1. Maybe you are thinking that 5/(2+3) is the same as 5/2+3, which is not the case. Improper math syntax doesn't mean there are two potential answers, but rather that the person who wrote it did so improperly.

[0] https://www.mathsisfun.com/operation-order-pemdas.html


Maybe the user means the difference between a simple calculator that evaluates everything as you type it in and one that can figure out the correct order. We used those simpler ones in school when I was young. The new fancy ones were quite something after that :)

So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.

“Which is a question that can be interpreted in only one way. And done only one way.”

The question for calculators is then the same as the question for LLMs: can you trust the calculator? How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?


> So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.

No. There being "more than one way" to interpret implies the meaning is ambiguous. It's not.

There's not one incorrect way to interpret that math statement, there are infinitely many incorrect ways to do so. For example, you could interpret it as being a poem about cats.


>>How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?

This is just splitting hairs. People who use calculators interpret it in only one way. You are making a different and a more broad argument that words/symbols can have various meanings, hence anything can be interpreted in many ways.

While these are fun arguments to make, they are not relevant to the practical use of calculators or LLMs.


I don't honestly think anyone can remember bash array syntax if they take a 2 week break. It's the kind of arcane nonsense that LLMs are perfect for. The only downside is if the fancy autocomplete model messes it up, we're gonna be in bad shape when Steve retires cause half the internet will be an ouroboros of ai generated garbage.

>>I wonder if my coding skill will deteriorate in the years to come...

Well, that's not how LLMs work. Don't use an LLM to do the thinking for you. You use LLMs to do the work for you, while you tell it (after thinking) what's to be done.

Basically, things like:

. Attach a click handler to this button with x, y, z params and on click route it to the path /a/b/c

. Change the color of this header to purple.

. Parse the json in param 'payload' and pick up the value under this>then>that and return

etc. kind of dictation.

You don't ask big questions like 'Write me a todo app' or 'Write me this dashboard'. Those questions are too broad.

You will still continue to code and work like you always have. Except that you now have a good coding assistant that will do the chore of typing for you.
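For instance, here is a rough sketch of what that third dictation might come back as, assuming a TypeScript codebase (the this > then > that key path is just a placeholder, as above):

    // "Parse the json in param 'payload' and pick up the value under this>then>that and return"
    function pickValue(payload: string): unknown {
      const parsed = JSON.parse(payload);              // throws on invalid JSON
      return parsed?.["this"]?.["then"]?.["that"];     // optional chaining guards missing keys
    }

You then review those few lines the same way you'd review a teammate's change, rather than treating the model as an architect.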


Maybe I'm too good with my editor (currently Emacs, previously Vim), but the fact is that I can type all of this faster than I can dictate it to an AI and verify its output.

Yes, editor proficiency is something that beats these things any day.

In fact, if you are familiar with keyboard macros, you can do a lot of heavy text-manipulation tasks in both Vim and Emacs.

I don't see these as opposing traits. One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?


> One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?

I mostly use manuals, books, and the occasional forum search. The advantage is that you pick up surrounding knowledge, and more consistent writing. And today, I know where some of the good stuff is. You're not supposed to learn everything in one go. I built a knowledge map where I can find what I want in a more straightforward manner. No need to enter into a symbiosis with an LLM.


I asked o1 to make an entire save system for a game/app I’m working on in Unity with some pretty big gotchas (Minecraft-like chunk system, etc) and it got pretty close to nailing it first try - and what it didn’t get was due to me not writing out some specifics.

I honestly don’t think we’re far out from people being able to write “Write me a todo app” and then telling it what changes to make after.

I recently switched back to software development from professional photography and I’m not sure if that’s a mistake or not.


I think that anybody who finds the process of clumsily describing the above examples to an LLM in some text box in English, and waiting for it to spit out some code they hope is suitable for their given programming context and codebase, more efficient than just expressing the logic directly in their programming language in an efficient editor probably suffers from multiple weaknesses:

- Poor editor / editing setup

- Poor programming language and knowledge thereof

- Poor APIs and/or knowledge thereof

Mankind has worked for decades to develop elegant and succinct programming languages within which to express problems and solutions, and compilers with deterministic behaviour to "do the work for us".

I am surprised that so many people in the software engineering field are prepared to just throw all of this away (never mind develop it further) in exchange for using a poor "programming language" (say, English) to express problems clumsily in a roundabout way, and then throw away the "source code" (the LLM prompt) entirely, simply to paste the "compiler output" (code the LLM spewed out, which may or may not be suitable or correct) into some heterogeneous mess of multiple different LLM outputs stitched together in a codebase held together by nothing more than the law of averages, and hope.

Then there's the fun fact that every single LLM prompt interaction consumes a ridiculous amount of energy - I heard figures such as the total amount required to recharge a smartphone battery - in an era where mankind is racing towards an energy cliff. Vast, remote data centres filled with GPUs spewing tonnes of CO₂ and massive amounts of heat to power your "programming experience".

In my opinion, LLMs are a momentous achievement with some very interesting use-cases, but they are just about the most ass-backwards and illogical way of advancing the field of programming possible.


There's a new mode of programming (with AI) that doesn't require English and also results in massive efficiency gains. I now only need to begin a change and the AI can normally pick up on the pattern and do the rest, via subsequent "tab" key hits as I audit each change in real time. It's like I'm expressing the change I want via a code example to a capable intern who quickly picks up on it and can type at 100x my speed, but not faster than I can read.

I'm using Cursor btw. It's almost a different form factor compared to something like GH copilot.

I think it's also worth noting that I'm using TypeScript with a functional programming style. The state of the program is immutable and encoded via strongly typed inputs and outputs. I spend (mental) effort reifying use-cases via enums or string literals, enabling a comprehensive switch over all possible branches as opposed to something like imperative if statements. All this to say that a lot of the code I write in this style can be thought of as a kind of boilerplate. The hard part is deciding what to do; effecting the change through the codebase follows more easily from a small start.
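As a minimal sketch of that style (the names here are invented for illustration): a use-case reified as a string-literal union, with a switch the compiler checks for exhaustiveness:

    // Each use-case is a member of a closed union rather than a loose string.
    type SyncStatus = 'idle' | 'syncing' | 'error';

    // Pure function over immutable input: no state is mutated.
    function statusLabel(status: SyncStatus): string {
      switch (status) {
        case 'idle':
          return 'Up to date';
        case 'syncing':
          return 'Syncing...';
        case 'error':
          return 'Retry';
        default: {
          // Exhaustiveness check: adding a new member to SyncStatus
          // makes this assignment a compile error until it's handled.
          const unhandled: never = status;
          return unhandled;
        }
      }
    }

Adding a new member to the union immediately flags every switch that hasn't handled it, and filling in those branches is exactly the kind of mechanical follow-through the tab completion is good at.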


Provided that we ignore the ridiculous waste of energy entailed by calling an online LLM every time you type a word in your editor - I agree that LLM-assisted programming as "autocomplete on steroids" can be very useful. It's awfully close to the utility of a good editor using the type system of a good programming language to provide suggestions.

I too love functional programming, and I'm talking about Haskell-levels of programming efficiency and expressiveness here, BTW.

This is quite a different use case than those presented by the post I was replying to though.

The Go programming language has this mantra of "a little bit of copy and paste is better than a little bit of dependency on other code". I find that LLM-derived source code takes this mantra to an absurd extreme, and furthermore that it encourages a thought pattern that never leads you to discover, specify, and use adequate abstractions in your code. All higher-level meaning and context is lost in the end product (your committed source code) unless you already think like a programmer _not_ being guided by an LLM ;-)

We do digress though - the original topic is that of LLM-assisted writing, not coding. But much of the same argument probably applies.


When you take energy into account it's like anti-engineering. What if we used a mountain of effort to achieve a worse result?

At the time I'm writing this, there are over 260 comments to this article and yours is still the only one that mentions the enormous energy consumption.

I wonder whether this is because people don't know about it or because they simply don't care...

But I, for one, try to use AI as sparingly as possible for this reason.


You're not alone. With the inclusion of Gemini-generated answers in Google search, it's going down the road of most capitalistic things: you see something is wrong, but you have no choice but to use it even if you don't want to.

> a certain degree of "productive struggle" is essential

Honestly, I'm not sure this accounts for most of the difficulty in learning. In my experience, most of the difficulty in learning something came down to a few missing pieces of insight. It often took longer to understand those few missing pieces than the rest of the topic. If they are accurate enough, LLMs are great for getting yourself unstuck and keeping yourself moving. Although it has always been a part of the learning experience, I'm not sure frantically looking through hundreds of explanations for a missing detail is a better use of one's time than digging deeper in the time you save.


I'm not saying you're wrong, but I wonder if this "missing piece of insight" is at least sometimes an illusion, as in the "monads are like burritos" fallacy [0]. Of course this does not apply if there really is just a missing fact that too many explanations glossed over.

[0] https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...


I once knew someone who studied CS and medicine at the same time. According to them, if you didn't understand something in CS after reasonable effort, you should do something else and try again next semester. But if you didn't understand something in medicine, you just had to work harder. Sometimes it's enough that you have the right insights and cognitive tools. And sometimes you have to be familiar with the big picture, the details, and everything in between.

Ideally you look and fail and exhaust your own efforts, then get unblocked with a tool or assistant or expert. With LLMs at your fingertips, who has both the grit to struggle and the self-discipline not to quit early? At the age of the typical student - very few.

One could argue as well that having generally satisfying, but at the same time omnipresent, "expert assistance" might actually end up empowering you.

Feeling confident that you can shrug off blockers that might otherwise turn exploration into a painful egg hunt for trivial unknowns can easily mean the difference between learning and abandoning.


Do you advocate that students learn without the help of teachers until they exhaust their own efforts?

That actually is an approach. Some teachers make you read the lesson before class, others give you homework on the lesson before lecturing it, and some even quiz you on it on top of that before allowing you to ask questions. I personally feel that trying to learn the material before class helped me learn it better than coming into class blind.

That’s the “flipped classroom” approach to pedagogics, for those who might be interested.

> Are you producing? Or are you learning?

> AI makes me at least 2x more efficient at my job. It seems irrational not to use it

Fair, but there is a corollary here -- the purpose of learning, at least in part, is to prepare you for the workforce. If that is the case, then one of the things students need to get good at is conversing with LLMs, because they will need to do so to be competitive in the workplace. I find it somewhat analogous to the advent of being able to do research on the internet, which I experienced as an early 90s kid, where everyone was saying "now they won't know how to do research anymore, they won't know the Dewey decimal system, oh no!". Now the last vestiges of physical libraries being a place where you can even conduct up-to-date research on most topics are crumbling, and research _just is_ largely done online in some form or another.

Same thing will likely happen with LLMs, especially as they improve in quality and accuracy over the next decade, and whether we like it or not.


A big one for me was nobody will know how to look up info in a dictionary or encyclopedia. Yep I guess that's true. And nobody would want to now either!

A current 3rd year college student here. I really want LLMs to help me in learning but the success rate is 0.

They often cannot generate relatively trivial code. When they do, they cannot explain that code. For example, I was trying to learn socket programming in C. Claude generated the code, but when I started asking about stuff, it regressed hard. Also, often the code is more complex than it needs to be. When learning a topic, I want that topic, not the most common relevant code with all the spaghetti used on GitHub.

For other subjects, like DBMS and computer networks, when asking about concepts you'd better double-check, because they still make stuff up. I asked ChatGPT to solve a previous year's question for DBMS, and it gave a long answer that looked good on the surface. But when I actually read through it, because I need to understand what it is doing, there were glaring flaws. When I point them out, it makes other mistakes.

So, LLMs struggle to generate concise, to-the-point code. They cannot explain that code. They regularly make stuff up. This is after trying Claude, ChatGPT, and Gemini with their paid versions in various capacities.

My bottom line is, I should NEVER use an LLM to learn. There is no fine line here. I have tried again and again because tech bros keep preaching about sparks of AGI and making startups with zero coding skills. They are either fools or geniuses.

LLMs are useful strictly if you already know what you are doing. That's when your productivity gains are achieved.


Brace yourself, the people who are going to come and tell you that it was all your fault are here!

I got bullied at a conference (I was in the audience) because when the speaker asked me, I said AI is useless for my job.

My suspicion is that these kinds of people basically just write very simple things over and over, and they have zero knowledge of theory or of how computers work. Also, their code is probably garbage, but it sort of works for the most common cases and they think that's completely normal for code.


I'm starting to suspect that people generally have poor experiences with LLMs due to bad prompting skills. I would need to see your chats with it in order to know if you're telling the truth.

There is no easy way to share. I copied them into Google Docs: https://docs.google.com/document/d/1GidKFVgySgLUGlcDSnNMfMIu...

One with ChatGPT about DBMS questions and one with Claude about socket programming.

Looking back, are some questions a little stupid? Yes. But of course they are! I am coming in with zero knowledge, trying to learn how the socket programming is happening here. Which functions are being pulled from which header files, etc.

In the end I just followed along with a random YouTube video. When you say you can get an LLM to do anything, I agree. Now that I know how the socket programming works, for the next assignment question, about writing code for CRC with socket programming, I asked it to generate the socket programming code, made the necessary changes, asked it to generate a separate function for CRC, integrated it manually, and voila, assignment done.

But this is the execution phase, when I have the domain knowledge. During learning, when the user asks stupid questions and the LLM's answers keep getting stupider, using them is not practical.


I had no idea what the question even was. I had ChatGPT (4o) explain it to me, and solve it. I now know what candidate keys are, and that the question asks for AB and BC. I'd share the link, but ChatGPT doesn't support sharing logs with images.

So you did not convince me that LLMs are not working (on the contrary), but I did learn something today! Thanks for that.


Is English not your first language?

Also, I'm surprised you even got a usable answer from your first question asking for a socket program, if all you asked was the bold part. I'm a human (pretty sure, at least) and had no idea how to answer the first bold question.


No, English is my second language.

I had already established from a previous chat that, upon asking for a server.c file, the LLM's answer was working correctly. The rest of the sentence is just me asking it to use and not use certain header files which it uses by default when you ask it to generate a server.c file. That's because, from the docs of <sys/socket.h>, I thought it had all the relevant bindings for the socket programming to work correctly.

I would say, the sentence logically makes sense.


The simpler explanation is that LLMs are not very good.

I can get an LLM to do almost anything I want. Sometimes I need to add a lot of context. Sometimes I need to completely rewrite the prompt after realizing I wasn't communicating clearly. I almost always have to ask it to explain its reasoning. You can't treat an LLM like a computer. You have to treat it like a weird brain.

You're not exactly selling it as a learning tool with this comment.

If the premise is that you first need to learn an alien psychology, that's quite the barrier for a student.


I was talking about coding in this context. With coding, you need to communicate a lot better than if you're just asking it to explain a concept.

The point is, your position goes against an inherent characteristic of LLMs.

LLMs hallucinate.

That's true, and given how they are made, it cannot be false.

Anything they generate cannot be trusted and has to be verified.

They are good at generating fluff, but I wouldn't rely on them for anything.

Ask at what temperature glass melts and you will get 5 different answers, none of them true.



The problem with these answers is that they are right but misleading in a way.

Glass is not a pure element, so that temperature is the "production temperature", but as an amorphous material it "melts" the way a plastic material "melts" and can be worked at temperatures as low as 500-700 °C.

I feel like without a specification the answer is wrong by omission.

What "melts" means when you are not working with a pure element is pretty messy.

This came out in a discussion for a project with a friend too obsessed with GPT (we needed that second temperature, and I was like "this can't be right... it's too high").


Yes. This is funny when I know what is happening and I can "guide" the LLM to the right answer. I feel that is the only correct way to use LLMs and it is very productive. However, for learning, I don't know how anyone can rely on them when we know this happens.

I mean, likely yes, but if you have to spend the time to prompt correctly, I'd rather just spend that time learning the material I actually want to learn.

I've been programming for 20 years and mostly JS for the last 10 years. Right now, I'm learning Go. I wrote a simple CLI tool to get data from several servers. Asked GPT-4o to generate some code, which worked fine at first. Then I asked it to rewrite the code with channels to make it async and it contained at least one major bug.

I don't dismiss it as completely useless, because it pointed me in the correct direction a couple times, but you have to double-check everything. In a way, it might help me learn stuff, because I have to read its output critically. From my perspective, the success rate is a bit above 0, but it's nowhere close to "magical" at all.


Care to share any of these chats?

Our internal metrics show a decrease in productivity when less experienced developers use AI, and an increase when experienced developers with 10+ years use it. We see a decrease in code quality across all experience levels, which needs to be rectified, but even with the time spent refactoring it's still an increase in productivity. I should note that we don't use these metrics for employee review in any way. The reason we have them is that they come with the DORA (EU regulation) compliance tool we use to monitor code quality. They won't be used for employee measurement while I work here. I don't manage people, but I was brought in to help IT transition from startup to enterprise, so I set the direction with management's confidence.

I'm a little worried about developers turning to LLMs instead of official documentation as the first thing they do. I still view LLMs as mostly being fancy auto-complete with some automation capabilities. I don't think they are very good at teaching you things. Maybe they are better than Google programming, but the disadvantage LLMs have seems to be that our employees tend to trust the LLMs more than they would trust what they found on Google. I don't see an issue with people using LLMs in fields they aren't too experienced with yet, however. We've already seen people start using different models to refine their answers, and we've also seen an increase in internal libraries and automation in place of external tools, which is what we want, again because we're under some heavy EU regulations where even "safe" external dependencies are a bureaucratic nightmare.

I really do wonder what it'll do to general education, though, seeing how terrible and how great these tools can be in a field I'm an expert in.


> But there is something to be said about atrophy. If you don't use it, you lose it.

YMMV, but I didn't ride a bike for 10ish years, then got back on and was happily riding again quickly after. I also use zsh and ctrl+r for every Linux command, but I can still come up with the command if I need to, just slowly. I've overall found that if I learn a thing, it's learnt. Stuff I didn't learn in university but passed anyways, like Jacobians, I still don't know, though I've got the gist of it. I do keep getting better and better at the banjo the less I play it, and getting back to the drumming plateau is quick.

Maybe the drumming plateau is the thing? You can quickly get back to similar skill levels after not doing the thing in a while, but it's very hard to move that plateau upwards.


Don't you see the survivorship bias in your thinking?

You learnt to ride the bike and practiced it rigorously before stopping for 10 years, and so you're able to pick it up again. You _knew_ the commands because you learned them the manual/hard way, and then used assistance to do it for you.

Now, do you think it will apply to someone who begins their journey with LLMs and doesn't quite develop the skill of "Does this even look right?!", who says to themselves "if LLMs can write this module, why bother learning what that thing actually does?", and who then gets bitten by LLM hallucinations and stares like a deer in headlights?


How long are your progress reports? Mine are a one sentence message like "Hey, we've got a fix for the user profile bug, but we can't deploy it for an hour minimum because we've found an edge case to do with where the account signed up from" and I'm not sure where the AI comes in.

The AI comes in to make it 10x longer, so you can pretend you worked a lot and assume the reader won't realise your report is just meaningless words, because you never read anything yourself.

I could probably copy it straight from my commit log if my team is amenable to bullet-point format.

I keep mine pretty short and have been writing them up bullet-style in org-mode for like 15 years. I can scan back over the entire year when I need to deal with my annual review and I don't think I spend more than 5 minutes on this in any given week. Converting from my notes to something I would deliver to someone else might take a few minutes of formatting since I tend to write in full sentences as-is. I can't imagine turning to an AI tool for this shit.

> I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?

Already do — that's what "brain training" apps; consumer EdTech like Duolingo and Brilliant; and educational YouTube and podcasts like 3blue1brown, Easy German, ElectroBOOM, Overly Sarcastic Productions all are.


AFAIK, none of these teach a person how to research a topic.

Neither did University, for me.

I just liked learning as far back as my memories go, so I've been jumping into researching topics because I wanted to know things.

University just gave me course materials, homework, and a lot of free time.

Jobs took up a lot of time, but didn't make learning meaningfully different from how it has always been for me.


I used to have dozens of phone numbers memorized. Once I got a cell phone I forgot everyone's number. I don't even know the phone number of my own mother.

I don't want to lose my ability to think. I don't want to become intellectually dependent on AI in the slightest.

I've been programming for over a decade without AI and I don't suddenly need it now.


It's more complicated than that—this trade-off between using a tool to extend our capabilities and developing our own muscles is as old as history. See the dialog between Theuth and Thamus about writing. Writing does have the effects that Socrates warned about, but it's also been an unequivocal net positive for humanity in general and for most humans in particular. For one thing, it's why we have a record of the debate about the merits of writing.

> O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

https://www.gutenberg.org/files/1636/1636-h/1636-h.htm


TIL: Another instance of history repeating itself.

Interesting perspective. I read your first line about phone numbers as a fantastic thing -- people used to have to memorize multiple 10 digit phone numbers, now you can think about your contacts' names and relationships.

But... I think you were actually bemoaning the shift from numbers to names as a loss?


Have you not run into trouble when your phone is dead but you have to contact someone? I have, and it's frustrating. Thankfully I remember my partner's number, though it's the only one these days.

These things are not mutually exclusive. Remembering numbers didn't hinder our ability to remember our contacts' names.

We don't know exactly how the brain works, but I don't think we can now do some things better just because we are not using another function of our brains anymore.


(Not OP.) For me it's a matter of dependency. Great, as long as I have my phone I can just ask Siri to call my sister, but if I need to use someone else's phone because mine's lost or dead, well, how am I going to do that?

Same as AI. Cool, it makes you 5x as efficient at your job. But after a decade of using it, can you go back to 1x efficiency without it? Or are you just making the highly optimistic leap that you will retain access to the tech in perpetuity?


I still remember the numbers I used as a kid.

Anyway now as an adult I have to remember a lot of pin codes.

* Door to home
* Door to office
* Door to gf's place
* Bank card #1
* Bank card #2
* Bank card #3
* Phone #1
* Phone #2


I'm curious what your exposure to the available tools has been so far.

Which, if any, have you used?

Did you give them a fair shot on the off-chance that they aid you in getting orders of magnitude more work done than you did previously while still leveraging the experience you've gained?


Well, sure. I can remember phone numbers from 30+ years ago approximately instantly.

I don't have to remember most of them from today, so I simply don't. (I do keep a few current numbers squirreled away in my little pea brain that will help me get rolling again, but I'll probably only ever need to actually use those memories if I ever fall out of the sky and onto a desert island that happens to have a payphone with a bucket of change next to it.)

On a daily, non-outlier basis, I'm no worse for not generally remembering phone numbers. I might even be better off today than I was decades ago, by no longer having to spend the brainpower required for programming new phone numbers into it.

I mean: I grew up reading paper road maps and [usually] helping my dad plan and navigate on road trips. The map pocket in the door of that old Chevrolet was stuffed with folded maps of different areas of the US.

But about the time I started taking my own solo road trips, things like the [OG] MapBlast! website started calculating and charting driving directions that could be printed. This made route planning a lot faster and easier.

Later, we got to where we are today with GPS navigation that has live updates for traffic and road conditions using systems like Waze. This has almost completely eliminated the chores of route planning and remembering directions (and alternate routes) from my life, and while I do have exactly one road map in my car that I keep updated, I haven't actually used it for anything since 2008 or so.

And am I less of a person today than I was back when paper maps were the order of the day? No, I don't think that I am -- in fact, I think these kinds of tools have made me much more capable than I ever was.

We call things like this "progress."

I do not yearn for the days before LLM any more than I yearn for the days before the cotton gin or the slide rule or Stack Overflow.


Well, true to your name, you are reducing it to a boolean dilemma.

When is the Butlerian Jihad scheduled?

This is much better. Love it when code follows the KISS principle, without unnecessary abstractions or optimizations when they're not needed.

