The Impact of Generative AI on Critical Thinking [pdf] (microsoft.com)
129 points by nosianu 5 days ago | 99 comments





I think this study does not say what most people are taking it to say.

> our research questions - when and how knowledge workers perceive the enaction of critical thinking when using GenAI (RQ1), and when and why do knowledge workers perceive increased/decreased effort for critical thinking due to GenAI (RQ2)

This is about the application of critical thinking to AI outputs and in AI workflows. Are people cognitively lazy when some other entity hands them plausible sounding answers?

The answer of course is yes. If some entity gives you a good enough result, you probably aren’t going to spend much time improving it unless there is a good reason to do so. Likewise, you probably aren’t going to spend a lot of time researching something that AI tells you if it sounds plausible. This is certainly a weakness, but it’s a general weakness in human cognition, and has little to do with AI in and of itself.

In my reading, what this study does not say, and does not set out to answer, is whether the use of AI makes people generally less able or less likely to engage in critical thinking.


On your last point I tend to think it will. Tools replaced our ancestors' ability to make things by hand. Transportation and elevators reduced the average fitness needed to walk long distances or climb stairs. Pocket calculators made the general population less able to do complex math. Spelling and grammar checks have reduced our need to know how to spell or form complete, proper sentences. Keyboards and email are making handwriting a fading skill. Video is reducing our need and desire to read or absorb long-form content.

Most humans will take the easiest path provided. And while we consider most of the above just improvements to daily life, efficiencies, they have also fundamentally changed what we are capable of on average and what skills we learn (especially during formative years). If I dropped most of us here into a pre-technology wilderness we'd be dead in short order.

However, most of the above, it can be argued, are just tools that don't impact our actual thought processes; thinking remained our skill. Now the tools are starting to "think", or at least appear to on a level indistinguishable to the average person. If the box in my hand can tell me what 4367 x 2231 is and the capital of Guam, why then wouldn't I rely on it when it starts writing up full content for me? Because the average human adapts to the lowest required skill set, I do worry that providing a device in our hands that "thinks" is going to reduce our learned ability to rationally process and check what it puts out, just like I've lost the ability to check if my calculator is lying to me. And not to get all dystopian here... but what if what that tool tells me is true is, for whatever reason, not?

(and yes, I ran this through a spell checker because I'm a part of the problem above... and it found words I thought I could still spell, and I'm 55)


> good enough result (…) sounds plausible

It’s paramount to not conflate the two. LLM answers are almost always the latter with no guarantee of being the former. That is a tremendous flaw with real consequences.

> it’s a general weakness in human cognition, and has little to do with AI in and of itself.

A distinction without merit. Like blaming people for addictive behaviour while simultaneously exploiting human psychology to sell them the very same addiction.

https://www.youtube.com/watch?v=EJT0NMYHeGw

This “technically correct” approach to arguing where the fault lies precludes productive discussion on fixing the problem. You cannot in good faith shout from the rooftops LLMs are going to solve all our problems and then excuse its numerous failures with “but we added a note you should always verify the answer. It’s down in the cellar, with no lights and stairs, in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’”.

It has become the norm for people to “settle” disputes by quoting or showing a screenshot of an LLM answer. Often wrong. It is irrelevant to argue that people should do better; they don’t, and that’s the reality you have to address.


The study is basically just... a bunch of self-reports on how people feel about AI usage in their daily job. That's it. The "critical thinking" part is completely self-reported.

It's not "how good a person is at critical thinking." It's "how much a person feels they engages in critical thinking." Keyword: feels.

It's like all the nutrition studies where they ask people to self-report what they eat.


There is a growing number of companies and agencies looking at banning AI, whatchamacallit, due to the increasing number of issues around erroneous stuff causing liability, of the scary kind.

What is being passed off as journalism does nothing to give confidence in "AI" or its minders.

And behind the scenes it is easy to imagine that actual conscientious human beings are being outcompeted by their "enhanced" co-workers, and just walking out to seek better compensation, or at least find work in a professional environment, sorry, sorry, an environment that suits their "legacy" skill set.


No control group, sample too small, self-selected... Researchers should be ashamed, and their institutions too...

Was pretty valuable for me to read. I don’t like using LLMs much for coding because the shift is from creating a solution to verifying a solution. This paper helped articulate that. Plus it’s still useful data if you understand what the data is.

[flagged]


The negative karma with no explanation definitely weakens the HN discussion rather than deepening the investigation of a topic.

I beg to differ. AI made it possible for humans to pursue critical thinking. Overwhelmed by basic facts and routine work, limited by bandwidth and 8 hours a day, we hardly have the luxury to think above and beyond. That's when you hire consulting firms to churn out the content, the ocean of information: the type of work now potentially suitable for AI.

It is time for humans to move up the game and focus on critical thinking, even if only critical thinking, while AI is still unable to perform it. Eventually there is the hope that AI will be able to handle critical thinking as well, but it remains a hope at the current state of the art.


It really has been a sight watching the loudest anti-AI people flog this around, then turn to rage when you clarify the range of the actual conclusion.

Generative AI got a lot more useful when I started seeing it abstractly. I put data in, it finds correlations in its neural network, and produces different data. I have to be intentional in interpreting that data to figure out what to do with it.

Once I started thinking in terms of "this word will kick up a hornet's nest in that branch of the network and make the output useless. Let's find another word that gets closer to what I'm aiming for," the outputs got so much better. And getting to the good outputs took fewer iterations.


> Generative AI got a lot more useful when I started seeing it abstractly. I put data in, it finds correlations in its neural network, and produces different data. I have to be intentional in interpreting that data to figure out what to do with it.

In my opinion this is a mischaracterization, just like those you stated others have "raged" [0] about. The simple question for you is: how do you know how to interpret? When precision and/or depth has no critical bearing, I agree with your sentiment. However, shades of grey appear in the simplest of prompts, often and quickly. People who do not already have the skill to "interpret" the data, as you put it, can (and probably will) assume it is correct. That end user is also not constantly reminded of the age of the underlying data the model was trained on, nor are they aware of how an LLM foundationally works or whether it is reasoning or not, amongst many other unknowns.

Yes, while I feel as though the Microsoft report can have an air of "yes, that's the condition we expect" you're also not considering other, very important, inputs to that trivial response. Read the paper in the context of middle and high school students and now how does the "rage" feel? Are you a parent on a school board seeing this happen first hand?

Not everyone has the analytical pedigree of people like yourself and the easy access to LLMs is pissing people off as they watch a generation being robbed via the easy (and oft wrong) button.

[0] "It really has been a sight watching the loudest anti-AI people flog this around, then turn to rage when you clarify the range of the actual conclusion."


edit: I mistook soapboxing for sincere interest in discussion. Please disregard.

> What's the unique angle on this that puts it in the same genre as the worries over new technology intersecting with ancient human ills that have vexed philosophers since the introduction of writing?

So you've responded without addressing any of the concerns outlined, instead providing personal justification and musings but not engaging with the actual question. The cherry on top is your personal exchange at the end. Awesome.


I assumed your misread and mischaracterization wasn't with ill intent since I could have worded that better, ignored it aside from the clarifying edit, and focused on the bit that was interesting.

It was my opinion, as stated. Your response was not part of the conversation but instead a stream of thought with no relation to my response to you. Given this needed to be explained, I'm not too surprised you were blocked. Enjoy!

Can you offer a few examples of the kinds of work/projects/tasks you've used AI for?

Guish, a bi-directional CLI/GUI for constructing and executing Unix pipelines: https://github.com/williamcotton/guish

WebDSL, fast C-based pipeline-driven DSL for building web apps with SQL, Lua and jq: https://github.com/williamcotton/webdsl

Search Input Query, a search input query parser and React component: https://github.com/williamcotton/search-input-query


Thanks for clarifying, based on the title I totally thought the study was about critical thinking in general.

For the record, I’m not going to read the article to verify your statement.


Study (PDF): https://www.microsoft.com/en-us/research/uploads/prod/2025/0...

I did not link to it directly because the PDF's title - "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers" - was way too long for the HN title length limit.

Abstract (from the PDF):

> The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices.

> We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks.

> Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks.

> Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.


> Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources.

The researchers appear to be saying that a lot of people perceive LLM outputs to be superior to whatever they could generate themselves, so users accept the LLM's logical premises uncritically and move on. Sort of a "drive by" use of chatbots. The researchers are not wrong, but intuitively I don't think this is a fair critique of most LLM tasks, or even most users.

One of the most compelling LLM use cases is individualized learning or tutoring, and LLMs can support a much deeper dive into arcane scientific or cutting-edge topics than traditional methods. I don't see anything here that suggests the researchers balanced these scenarios against one another.


> One of the most compelling LLM use cases is individualized learning or tutoring, and LLMs can support a much deeper dive into arcane scientific or cutting-edge topics

When you deep dive with an LLM, everything needs to be verified. The LLM could hallucinate or provide incorrect information and you won't even know unless you look it up. The last thing a learner needs is incorrect information presented as fact while learning.


The key thing about this paper is the invalidity of the measures. They are essentially looking at the correlation between measures of frequency of AI usage and measures of critical thinking skills.

Yet, in the questions themselves, questions like “I always trust the output of AI” (paraphrasing) are in the measures of frequency, and questions like “I question the intentions of the information provided by AI” (which is not a reasonable question if you use AI regularly) are in the measures of critical thinking.

Sorry I don’t have time now to share the actual text. Take a look yourself though, at the end of the paper.


If a study comes out which demonstrates to high confidence that some extremely horrible outcome is certain with this technology, would any changes be made? Would we tell OpenAI to close shop? I don't think, with the way our society and politics is set up, we would be able to deal with this. There's too much money, ego, and general inertia at play to change course.

Climate change is already a pretty good precedent on this

That's too broad though, because there have been specific issues we (humanity) have managed to come together to fix, like the "ozone hole" being "solved" with regulations, protocols, and public policy, while there are other issues we haven't quite been able to do the same with, all under the umbrella of "climate change".

If a study comes out, no. But I do not think the trajectory of LLMs is as determined as you may think

I mean, that question is really a question for yourself. You cannot control others, so assume others will do whatever is quickest, easiest, and/or offers the least resistance, outcomes be damned.

Of course nothing would immediately do a 180, but if research came out showing that your brain slowly stops working the more you use LLMs, then you could at least make your own choice from there on out.


That's a brilliant strategy to reach ASI, lowering the bar on humans.

Interesting, in my experience LLMs hallucinate so much on stuff I know about that I instinctively challenge most of their assumptions and outputs, and I found that this kind of dialectic exchange brings the most out of the "relationship", so to speak, co-creating something greater than the isolation of us two.

Relevant 2018 essay by Nicky Case «How To Become A Centaur»: https://jods.mitpress.mit.edu/pub/issue3-case/release/6


I haven't used LLMs a lot and have just experimented with them in terms of coding.

But about a year ago, I had a job to clean up a bunch of, let's call them, reference architectures. I mostly didn't mess with the actual architecture part, or I went directly to the original authors when I did.

But there wasn't a lot of context setting and background for a lot of them. None of it was rocket science; I could have written it myself. But the LLM I used, Bard at the time, gave me a pretty good v0.9 for some introductory paragraphs. Nothing revelatory, but it probably saved me an hour or two per architecture, even including my time to fix up the results. Generally, nothing was absolutely wrong, but some of it I felt was less relevant, and other stuff I felt was missing.


> in my experience LLMs hallucinate so much on stuff I know about that I instinctively challenge most of their assumptions and outputs

In my experience most people don’t do that. Therein lies the problem.


As I expected. When I got a cell phone I forgot everyone's phone numbers, and I'm afraid if I use AI I'll forget how to think.

It probably also depends on whether you trust the AI. I don't, at all. So, I'm often critically thinking about whatever nonsense it spews out.

I wonder a lot why people use generative AI in the first place.

I’ve found in my work that I don’t really have anything to ask an AI to help me with that the generative AIs available to me are particularly good at. I distrust them, but I also haven’t really found a use for them. Which isn’t to say that such generative AIs don’t exist; I just don’t have access to those particular ones due to the nature of my work, and so haven’t tried them.

I don’t really have a use for them personally either. I think the closest I come is voice assistants, which I use so sparingly I wouldn’t miss if they were gone.


"You are not immune to Brain Rot"

Don't exercise your body, become fat.

Don't exercise your brain, become stupid.

LLMs give us another excuse not to exercise our brains.


A warning on this topic, in short story form: http://web.archive.org/web/20121008025245/http://squid314.li...

(By Scott Alexander)


I've thought about this myself and try to cultivate a mindset of distrust towards the LLM. It's hard because the way it is set up is just so convincing that you regularly need to remind yourself that it might be bullshitting.

Really the user interface should be like "here's a system where you can enter some text and it outputs statistically generated text that might be useful to you". Of course that won't happen because the charade (the conversational interface) is part of its success.


Any chance the link can be changed to the original paper (https://www.microsoft.com/en-us/research/uploads/prod/2025/0...) and the title changed to "The Impact of Generative AI on Critical Thinking"?


I have this horrible feeling that in the future people are going to walk around with AR glasses and some AI is just going to tell them what to do all the time. They will become meat robots doing the bidding of some hallucinating model which has been fine-tuned with an agenda.

This is already how society works, you just aren't wearing the glasses.

Childhood education, entertainment, social networks are all constantly trying to guide our actions. Agendas are everywhere.

AI might actually be more objective or transparent, depending on how it's implemented. The key will be choice - you can either choose not to use it, or you can choose which models you prefer.

Unfortunately, most people have little choice in their society.


Many people have already abandoned their mental autonomy to whatever delusions they receive from whatever social network they are on.

I suppose the LLM could be worse since it will more directly promote someone's agenda. I just question how much worse it could possibly be.


Those with critical thinking skills already realize this.

Isn't MS heavily involved in AI BS?


I doubt they mind if their customers lack critical thinking skills

Unfortunately, they don't seem to mind if their employees lack them either, if their products and some occasional comments by them on here are any indication.

They saw how much Apple was making from their customers.

I want them to do the same study with managers. And maybe executives.

Does relying on employees kill the same critical thinking skills? Or is this a skill that has to be cultivated like every other skill?


From my experience it does. A lot of VPs and higher have no direct experience with the things they are responsible for, so they can be easily manipulated by smooth talkers. I just attended a strategy meeting where a lot of higher ranks asked questions that clearly showed they have no idea.

A related personal example: my 12-year-old son is a terrible speller; whenever I've tried to encourage him to improve, he's like "why do I need to know how to spell, I just use grammarly".

And so we raise the next generation, who grow up with AI and see no more value in learning a host of knowledge skills.


The notion of a correct spelling is a relatively new innovation. I can't come up with a defense even for a conventional spelling that doesn't lean on "people will try to shame you for it," and that doesn't seem like a great reason to do things that add cognitive load for no real benefit.

Written communication depends on correct spelling and grammar to a certain extent. Grammarly can only help you so much.

If you send an email that is rife with spelling or grammatical mistakes, especially basic ones, your interlocutors are not likely to take you seriously.

I agree with the study participants. Using LLMs makes me feel less capable at some cognitive tasks, but it's hard to say whether I've actually declined or if use of AI has dispelled some of the Dunning-Kruger effect for me around my own skills.

For instance, LLMs are excellent at rewriting text in a specific style/tone. I used to believe I was quite good at that, but LLMs always do better. Now I no longer believe I am quite good. Did I become worse at text synthesis, or did I simply become more aware of my skills and limitations?

This is emblematic of a broader issue with self-reported data. It'd be good to measure critical thinking skills against earlier benchmarks for a clearer picture.

It's also important to not only focus on one skill, because while AI might make certain areas of our mind atrophy, others might now be more engaged. Just like when pocket calculators became popular and we all got worse at mental arithmetic but much better at applied mathematics overall. High level programming languages have made software engineers worse at comp-sci, but much better at applying it; commodification of cars made the average driver much less capable of understanding car mechanics, but much better at driving; and so on. My thesis is that human brainpower is not generally reduced by technological innovation, but changed.

One thing's for sure - cognitive changes in society (and especially how they relate to technology) is an area I'd like to see more research in. I think there are a few discoveries to be made there.


That's a great point about how using the models may actually just be revealing our own incompetence, not causing it.

Maybe it's a quantity over quality issue. What if we're actually doing more of the thing, or at least guiding the process, than we would have if LLMs weren't available, and that's what enables us to become better at recognizing how LLMs are actually beneficial for the task.

I remember the story of the two groups of pottery students, one tasked with making as many pots as possible, and the other with making one perfect pot. At the end of the day, the quantity group's pots were also higher quality than the quality group's single pot, simply because they had far more practice.

I know I code more with LLMs than I did before.


“Just like when pocket calculators became popular and we all got worse at mental arithmetic but much better at applied mathematics overall. High level programming languages have made software engineers worse at comp-sci, but much better at applying it; commodification of cars made the average driver much less capable of understanding car mechanics, but much better at driving; and so on.”

I don’t agree with any of these conclusions. I guess it’s debatable of course.


Why not?

Review code or waste time thinking and coding? I guess that's the question.

I can either spend 20 minutes arguing with an LLM to get something that maybe does most of what I want, or spend 25 minutes to get something that does exactly what I want.

I wouldn’t call those 5 minutes time wasted.


Or you could learn to use an LLM for what it is good at so that it actually saves you time (arguing with an LLM for a long time is a sure sign of not having a lot of experience with this tool...)

A turn of phrase ;-) I spent thousands of hours working closely with LLMs. It took me a lot longer than it should have to realize they were slowing me down, not speeding me up.

I’ll ask the odd question here and there, and they can be good search engines, but realistically, if my time with the LLM can be categorized as “reviewing code,” it would be a waste of my time.


Fair enough, I guess as with all tools YMMV and things depend on the kind of work. I also have days where I have no use for them.

200 IQ: Waste time rewriting AI slop.

Does that mean that being a manager also kills critical thinking?

I want a study on the impact of generative AI on knowledge. I fear that AI hallucinations have the potential to destroy all knowledge. For now, when an AI hallucinates, I can still check an established source to see if the fact is correct. But what happens when most established sources are themselves AI generated? There will be almost no ability to verify any fact. The very foundation of society, and any shared knowledge, collapses. Oh wait, that already happened with flat earthers, Covid and conspiracy theories. But AI may be the final nail in the coffin for Western civilization.

I’m surprised they published this with how much they are invested in AI…

Why? It wouldn’t surprise me at all if critical thinking is overapplied in most roles or poorly executed.

There are many use cases where people not following a process and thinking they know better is an issue.


The critical thinking part of me loves seeing takes like this, which immediately rev it up and get it to wonder whether its immediate reaction is really correct. Thank you.

These sound like ideal roles for bots/automation

The Greeks found that relying on writing kills memory.

People are worried AI is making us dumber. You hear it all the time. GPS wrecked our sense of direction. Spellcheck killed spelling. Now it’s AI’s turn to supposedly rot our brains.

It’s the same old story. New tool comes along, people freak out about what we’re “losing.” But they’re missing the point. It’s never about losing skills, it’s about shifting them. And usually, the shift is upwards.

Take GPS. Yeah, okay, maybe you can’t navigate with a paper map anymore. So what? Navigation isn’t about memorizing street names. It’s about getting from A to B. GPS makes that way easier, for way more people. Suddenly, everyone can explore, find their way around unfamiliar places without stress. Is that “dumber”? No, it’s just… better navigation. We optimized for the outcome, not the parlor trick of knowing all the streets by heart.

Same with the printing press. Before that, memory was king. Stories, knowledge – all in your head. Then books came along, and the hand-wringing started. “We’ll stop memorizing! Our minds will get soft!” Except, that’s not what happened. Books didn’t make us dumber. They democratized knowledge. Freed up our brains from rote memorization to actually think, analyze, create. We shifted from being walking libraries to… well, to being able to use libraries. Again, better.

Now it’s AI and coding. The worry is, AI code assistants will make us worse programmers. Maybe we won’t memorize syntax as well. Maybe we’ll lean on AI to fill in the boilerplate. Fine. So what if we do?

Programming isn’t about remembering every function name in some library. It’s about solving problems with code. And AI? Right now, it’s a tool to solve problems faster, more efficiently. To use it well in its current form, you need to be better at the important parts of programming:

- Problem Definition: You have to be crystal clear about what you want to build. Vague prompts, vague code. AI kind of forces you to think precisely.

- System Design: AI can write code snippets. As of right now, designing a whole system? That’s still on you. And that’s the hard part, the valuable part.

- Testing and Debugging: AI isn’t magic. At least, not yet. You still need to test, validate, and fix its output. Critical thinking, still essential.

So, yeah, maybe some brain scans will show changes. Brains are plastic. Use a muscle less, it changes. Use a new one more, it grows. Expected. But if someone’s scoring lower on some old-school coding test because they rely on AI, ask yourself: are they actually worse at building software? Or are they just working smarter? Faster? More effectively with the tools available today?

This isn’t about “dumbing down.” It’s about cognitive specialization. We’re offloading the stuff machines are good at – rote tasks, memorization, syntax drudgery – so we can focus on what humans are actually good at: abstraction, creativity, problem-solving at a higher level.

Don’t get caught up in nostalgia for obsolete skills. Focus on the outcome. Are we building better things? Are we solving harder problems? Are we moving faster in this current technological landscape? If the answer is yes, then maybe “dumber” isn’t the right word. Maybe it’s just... evolved. And who knows what’s next?

https://tulio.org/blog/dumber-no-different/


After watching [Oxford Researchers Discover How to Use AI to Learn Like a Genius](https://youtu.be/TPLPpz6dD3A?si=FJJ-S6wz0PPrJuSn) a few days ago, I've been using ChatGPT in "reverse mode" a lot. I give it an excerpt of a text I'm reading and ask it to ask me questions about it at different levels of detail.

I have to say it feels like a superpower! The answers to questions that you had to supply yourself really stick in your memory, as do the links that spontaneously form to bodies of knowledge you already know when answering deeper-level questions.
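If you want to do this outside the chat UI, a minimal sketch of the same "reverse mode" idea against the OpenAI Python SDK looks roughly like this (the model name and prompt wording are my own assumptions, not anything from the video or the paper):

```python
# Minimal sketch: ask the model to quiz me on an excerpt instead of answering for me.
# Assumes the openai package is installed and OPENAI_API_KEY is set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def quiz_me(excerpt: str, level: str = "recall") -> str:
    """Return questions about the excerpt at a chosen level of detail
    ("recall", "application", or "synthesis")."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system",
             "content": "You are a tutor. Ask questions about the provided text. "
                        "Do not answer them yourself."},
            {"role": "user",
             "content": f"Level of detail: {level}\n\nText:\n{excerpt}\n\n"
                        "Ask me 3 questions, one per line."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(quiz_me("In 1905 Einstein published four papers...", level="synthesis"))
```

The whole trick is in the system prompt: tell the model to ask rather than answer, so you stay the one doing the recall.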

I'm thinking that LLMs might actually address some of Plato's complaints against reading and writing:

> You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.

See [here](https://fs.blog/an-old-argument-against-writing/).


This is exactly the use case I’ve been thinking about, so thank you for linking this video.

What I want is for my ereader to feed the text I just read into a good LLM and then quiz me on what I read.

What’s kind of funny is that I hated homework as a kid; now I’m basically begging for a computer to give me some.


Yes, agreed.

It would be good if some spaced repetition was thrown into the mix as well.
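An SM-2-style scheduler is one common way to bolt that on: grade your own recall of each question and the review interval grows or resets accordingly. A minimal sketch (my own simplification of SM-2, not something from the thread or the video):

```python
# Simplified SM-2-style scheduler: grade recall 0-5, get the next review interval in days.
# Illustrative only; real SRS tools (Anki etc.) add more bookkeeping.
from dataclasses import dataclass

@dataclass
class Card:
    easiness: float = 2.5   # how "easy" the card is; never drops below 1.3
    repetitions: int = 0    # consecutive successful reviews
    interval: int = 0       # days until the next review

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0 (total blackout) to 5 (perfect recall)."""
    if quality >= 3:
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.easiness)
        card.repetitions += 1
    else:
        card.repetitions = 0
        card.interval = 1
    card.easiness = max(1.3, card.easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

card = Card()
for grade in (5, 4, 5):
    card = review(card, grade)
    print(card.interval)  # 1, 6, 16
```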


This assumes you always have reliable GPS:

- areas that lack map details and obstructions that preclude straight line paths, e.g. deep forest

- wartime GPS blocking

- device failure incl battery

- errors in mapping data

- maps that contain other information not found in your GPS-enabled map


Like it or not, you live in an industrial society that depends on a million things you cannot possibly replicate or repair if they fail.

My mom used these arguments against pocket calculators when she taught high school math. What if the battery dies?

Now calculators are solar powered, or built into your...

What if your phone battery dies?


So then carry a map or ask for directions when these situations arise? What does this have to do with the positive trade-offs when using GPS?

Knowing which direction you are facing is an obsolete skill?

I don’t think you’ve been paying attention.

You’ve been watching too many Ted talks or something


What do you do with that knowledge? I mean, it's a good skill if you're actually going to apply it to something. Knowing you're facing north is great if there's some use you get out of that. Finding your way around is way more work than just "north is that way".

I remember the first time I traveled internationally. Incredible stress. I had a bunch of printouts with all the details. Still, I depended completely on the friend I was visiting. I literally didn't dare leave his house without him, because I wasn't confident I could make it back without having him rescue me, and I didn't want to give him that trouble. Sure, I had a bunch of info, but one wrong decision and my static stack of papers might not be enough to get me out of it.

Technology made an absolutely amazing difference. With GPS I could wander around a city aimlessly and still find a way to my hotel. I could figure out where the center was. The early incarnation was rough but amazing for the amount of stress relief it provided.

And modern tech? Just sci-fi magic. I can see both the usual sights and find various obscure ones and any business I might need. With Uber I could get a trip in random countries wherever needed without speaking the local language. Google now tells me about bus and tram routes, tells me what to take, where the station is, what stations I'm going to go through, and when I'm going to be there. There's a magic real-time translator for both text and voice.


> Sure I had a bunch of info, but one wrong decision and my static stack of papers might not be enough to get me out of it.

If you knew how to read a map it would have been.

I can't imagine being so dependent on a phone which can be accidentally dropped, misplaced, or simply run out of battery charge that I would be lost without it.


Paper has limited information. If you failed to acquire a map with the relevant useful information, you have a problem. Not every map contains enough information for every possible need. If you planned on going by car and then improvised and took a bus, you might not even have bus stops marked on it.

> I can't imagine being so dependent on a phone which can be accidentally dropped, misplaced, or simply run out of battery charge that I would be lost without it.

Same way you deal with anything else: what if you have a car problem? So you plan ahead. Get the car checked before a trip, fill the tank, figure out what to do if it does break.

Phones are easier. I've got a stack of old ones that are still functional, easy to bring an extra one. I have an external battery. You can charge in many cafes and similar, just find a Starbucks or something. You can go to a shop and buy a battery or the cheapest phone they have if it comes to that.


…Did you just chastise me for being able to orient myself?

AI can program, but not engineer. Even then, you eventually reach a point in the project where it is too complex for AI to even do snippets; especially if you are pioneering something new that has never been done before.

A sprinter is unlikely to win a marathon, and that is what using AI to program is like. By the time you have to take over, you have a huge learning curve ahead of you as you can lean on the AI less and less.

If you're doing something boring/boiler-plate, yeah, AI is helpful I guess.


Most people with "engineer" titles spend relatively little of their time on actual quantitative engineering or "higher level" thinking. A lot of their work involves manual information processing: Organizing and arranging things, fitting things together, troubleshooting. This could be justified for a couple of reasons: Maybe a lot of the stuff that was "engineering" is now handled by the CAD software. That's great. But also, the efficiency of those tools has raised the complexity of systems to the point where the interaction between parts consumes most of the engineers' attention.

Managers also spend most of their time on the same things, but handling different kinds of information.

But CAD hasn't changed the immutable laws of engineering, such as Brooks's Law. When I hear about the wonders of AI transforming engineers into higher level thinkers, my snarky response is: "Does this mean that projects will finish on time?"


If your engineers (software or otherwise) aren’t spending a lot of time engineering, then you’ve got a hiring problem. Most jobs I’ve worked as a software engineer are 90% engineering (soft and hard skills) and only about 10% programming. With AI, it becomes about 60% engineering and 20% babysitting an AI, and 30% programming because the AI got it wrong.

Now, we can’t even hand this stuff off to juniors and teach them things they’ll hopefully remember. Instead, I have to explain to an AI, for the 60th time, that it has hallucination problems.

Personally, I’d rather have the juniors back.


> AI can program, but not engineer.

I feel like that's what the OP said. People can focus on the engineering part and not memorizing syntax or function names.

Too often I see people thinking in very binary terms, and we see it here again. AI does everything or nothing. I just keep thinking it'll be in between and people who are good at leveraging every tool at their disposal will reap the largest benefits.


You don't need AI if that's all you're using it for. In fact, IDEs have been doing a fine job at that for years.

It feels right now, that much of the time, AI is a solution looking for a problem to solve.

I find it more useful to treat AI like an easier to search stack overflow. You can ask it to go find you an answer, and then elaborate when it's not the right one.


> People are worried AI is making us dumber.

I'm far from an "LLM Defender" but I've heard plenty of people say "well Google said.." for at least a decade.

I think LLMs accelerate this, but we're not in totally unfamiliar territory here.



This is dead on. I'm not even a big AI fan, but this is a key idea about technology in general. I don't want to have to bring to mind the laws of physics every time I drive to work. The whole point is that a group of engineers encoded them into the machine for me, and now I enhance my capabilities without needing to know how. It's what the classic Alfred North Whitehead quote is talking about. I understand the impulse towards mastery and ever-expanding knowledge—who doesn't love the idea that they should be able to "plan an invasion, change a diaper, butcher a hog"—but the truth is there is a finite set of skills we are capable of mastering in a lifetime. This is why even literal geniuses often fail when they step outside their field of expertise. It's a valid concern that as a society some skill will be lost, or concentrated in the hands of too few, but losing skills and knowledge (or as I would simply call it, "being permitted to forget") is in general fine. Now if AI literally killed people's ability to think, that would be one thing. But what I suspect is that, like the parent is saying, it allows you to turn off your brain for certain tasks, like every technology. Then the question is what more complex tasks we can do on top of the automatic and thoughtless ones.

EDIT: I see some good replies to parent about stability/reliability, alienation etc. There are definitely tradeoffs to the power you get from technology, and it's worth acknowledging those. But that's exactly the framework we should be thinking in. What are the tradeoffs involved? Often these kinds of stories are one-sided arguments that imply losing skills is straightforwardly bad, when in truth it's more complicated than that.


> so we can focus on what humans are actually good at

You know what humans are good at? Deluding ourselves. Because that's what you're doing. Using vague, feel-good words, based on vague analogies, with no proof, to keep the inconvenient truth at bay. Not being able to navigate with a paper map is a big thing: people get lost inside buildings without a map. Next time the power fails, half of Gen Z will be lost. Not being able to write with pen and paper is a big thing. Not being able to add a few numbers is not as big a thing, but it certainly can be a problem. And what for? So you don't have to feel bad about using AI tools?

You know what comes next? Everything based on audio and video. Are you then going to argue: reading is an obsolete skill?


GPS still works when power fails.

Until your phone battery runs out.

> It’s the same old story. New tool comes along, people freak out about what we’re “losing.” But they’re missing the point. It’s never about losing skills, it’s about shifting them. And usually, the shift is upwards.

Except for that widespread feeling of hopelessness, alienation, powerlessness, lack of motivation, and lack of ambition. Almost as if not learning any human skills, and relying solely on technology for everything might have some second-order effects.


I've thought the same way you do, that people are resistant to change but eventually it's better for everybody.

I do believe that GPS made people worse drivers. It made it so people lost sense of direction and distance. It has removed all critical thinking on the road. Plenty of stories where people drive over stairs because the GPS told them so.

From the perspective of a driver's ability to navigate, I don't think you can do more now with GPS than you could before with a map. It surely has made it easier, but at a significant cost.

Now, of course, there are plenty of benefits, such as reducing time when getting somewhere unknown (e.g. ambulances), planes not flying over hostile territory (mostly), the ability to tell someone where you are when there are no landmarks around, etc.

But the reality is that overall, a mistake of a GPS is usually rather localized, and the cost of the mistake is rather low.

Books are interesting: instead of memorizing details we now memorize where to find information, little bits that help us get to the solution of the problem we're solving. But books themselves haven't replaced memory, otherwise no one would read them ahead of time anymore.

When we search for something on the internet we are taught to apply critical thinking. What are the sources? [0]. But GPS? Just go with it.

And AI is more like GPS than it is like books. We are being taught to take it at face value, and to abandon critical thinking for the sake of speed. Worse yet, because of the enormous financial investments of companies, there is an incentive to lie about how usable it is.

I'm not even talking about context windows. I'm talking about the endless minutiae of languages, frameworks, and changes related to specific versions that you only learn by doing. Just the same way a resident does not become a doctor until they finish residency. They have to have done the work and applied critical thinking.

Software Engineering does not have such legal requirements, but we all learn on the job. AI, and the companies pushing for it basically tell potential clients that this is no longer needed. Would you want a gallbladder surgery done by someone who just read a Wikipedia page about it?

Now, a seasoned developer who writes a crystal clear prompt will probably pick up on bugs, and tell the AI that they want edge cases A, B and C considered. But how did they learn that those exist? Right. By hitting the issues.

Something that happens a lot in Software Engineering, due to the massive amount of things out there and no fixed specs/docs/etc., is that your approach changes while you're developing a solution for a problem. But the need for those changes only becomes apparent when you're writing and testing code.

You literally cannot front-load that into your prompt. Yet, reading the news here, we see that our future is writing prompts for a much lower wage. This is orthogonal to why I went into Software Engineering. Prompts rob me of the ability to express something in an extremely well-defined language. Clarity of rules. A syntax where you can express something without ambiguity [1].

You don't know what you don't know, meaning you can't prompt for what you don't know. Hence why they brought a whole bunch of people back out of retirement to build new MANPADS.

[0] Interestingly when I was growing up a book quote was ok, but Wikipedia was not, even though it came from a book. That now definitely has changed.

[1] A wife sends her programmer husband to the grocery store for a loaf of bread... On his way out she says "and if they have eggs, get a dozen". The programmer husband returns home with 12 loaves of bread....
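For what it's worth, the husband's literal reading maps onto code pretty directly (a purely illustrative sketch, not part of the joke):

```python
# The husband's literal parse: "a dozen" stays bound to the bread,
# the only item explicitly requested. Illustrative only.
def shop(store_has_eggs: bool) -> dict:
    loaves_of_bread = 1
    if store_has_eggs:
        loaves_of_bread = 12  # "get a dozen"... of the thing I was told to get
    return {"bread": loaves_of_bread, "eggs": 0}

print(shop(store_has_eggs=True))  # {'bread': 12, 'eggs': 0}
```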


A good programmer husband would’ve asked “a dozen of what?”. A poor programmer assumes they understand the statement much like an LLM would do.

No shit Sherlock

> Thus, knowledge workers with higher levels of trust in GenAI — generally or for specific tasks — perceive engaging in critical thinking activities to be less effortful. A possible explanation, supplemented with our qualitative analysis in RQ1 (see Section 4.3.2), is that trust and reliance on GenAI inhibit the enaction of critical thinking, i.e., users underinvest in critical thinking when using GenAI.

There is a reduction in certain kinds of effort when using any kind of tool.

Another possible explanation is that using LLMs, much like using a pry bar for leverage with physical energy, provides leverage for cognitive energies. Does the person using a pry bar become less physically fit than if they were removing floorboards by hand or do they just get more work done during the day?

I don't think there is a problem in exploring the trade-offs for using any kind of tool or technology but the discussion needs to be balanced.

Personally, I use LLMs but I'd say I have a critical understanding of their limitations and when it is better to start using another tool, like a book, to learn the skills required to use the tools to achieve the tasks at hand. I will say that using a tool that provides leverage for cognitive energies allows for the cognition to be applied to other parts of the process, typically higher order.

For example, I spend much more time thinking about the overall architecture of the memory model in C applications.



