I think this study does not say what most people are taking it to say.

> our research questions - when and how knowledge workers perceive the enaction of critical thinking when using GenAI (RQ1), and when and why do knowledge workers perceive increased/decreased effort for critical thinking due to GenAI (RQ2)

This is about the application of critical thinking to AI outputs and in AI workflows. Are people cognitively lazy when some other entity hands them plausible-sounding answers?

The answer, of course, is yes. If some entity gives you a good enough result, you probably aren’t going to spend much time improving it unless there is a good reason to do so. Likewise, you probably aren’t going to spend a lot of time researching something AI tells you if it sounds plausible. This is certainly a weakness, but it’s a general weakness in human cognition, and it has little to do with AI in and of itself.

In my reading, what this study does not say, and does not set out to answer, is whether using AI makes people generally less able, or less likely, to engage in critical thinking.


On your last point, I tend to think it will. Tools replaced our ancestors' ability to make things by hand. Transportation and elevators reduced the average person's fitness for walking long distances or climbing stairs. Pocket calculators made the general population less able to do complex math. Spelling and grammar checkers have eroded our ability to spell or form complete, proper sentences. Keyboards and email are making handwriting a fading skill. Video is reducing our need and desire to read or absorb long-form content.

Most humans will take the easiest path provided. And while we consider most of the above to be improvements to daily life, efficiencies, they have also fundamentally changed, on average, what we are capable of and what skills we learn (especially during formative years). If you dropped most of us here into a pre-technology wilderness, we'd be dead in short order.

However, most of the above, it can be argued, are just tools that don't impact our actual thought processes; thinking remained our skill. Now the tools are starting to "think", or at least appear to on a level indistinguishable to the average person. If the box in my hand can tell me what 4367 x 2231 is and the capital of Guam, why wouldn't I rely on it when it starts writing up full content for me? Because the average human adapts to the lowest required skill set, I do worry that putting a device in our hands that "thinks" is going to reduce our learned ability to rationally process and check what it puts out, just as I've lost the ability to check whether my calculator is lying to me. And not to get all dystopian here... but what if what that tool tells me is true is, for whatever reason, not?
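
(To be fair, you can still sanity-check the box without redoing the whole multiplication: 4367 x 2231 has to end in 7, because 7 x 1 = 7, and it has to land near 4400 x 2200 = 9,680,000. The actual answer, 9,742,777, passes both checks. The trouble is that almost nobody bothers.)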

(and yes, I ran this through a spell checker because I'm a part of the problem above... and it found words I thought I could still spell, and I'm 55)


> good enough result (…) sounds plausible

It’s paramount not to conflate the two. LLM answers are almost always the latter, with no guarantee of being the former. That is a tremendous flaw with real consequences.

> it’s a general weakness in human cognition, and has little to do with AI in and of itself.

A distinction without merit. It’s like blaming people for addictive behaviour while simultaneously exploiting human psychology to sell them the very same addiction.

https://www.youtube.com/watch?v=EJT0NMYHeGw

This “technically correct” approach to arguing where the fault lies precludes productive discussion on fixing the problem. You cannot in good faith shout from the rooftops that LLMs are going to solve all our problems and then excuse their numerous failures with “but we added a note that you should always verify the answer. It’s down in the cellar, with no lights and no stairs, in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’”.

It has become the norm for people to “settle” disputes by quoting or showing a screenshot of an LLM answer. Often a wrong one. It is irrelevant to argue that people should do better; they don’t, and that’s the reality you have to address.


The study is basically just... a bunch of self-reports on how people feel about AI usage in their daily job. That's it. The "critical thinking" part is completely self-reported.

It's not "how good a person is at critical thinking." It's "how much a person feels they engage in critical thinking." Keyword: feels.

It's like all the nutrition studies where they ask people to self-report what they eat.


There is a growing number of companies and agencies looking at banning AI, whatchamacallit, due to the increasing number of issues around erroneous output causing liability, of the scary kind.

What is being passed off as journalism does nothing to give confidence in "AI" or its minders.

And behind the scenes it is easy to imagine that actual conscientious human beings are being outcompeted by their "enhanced" co-workers, and are just walking out to seek better compensation, or at least to find work in a professional environment, sorry, sorry, an environment that suits their "legacy" skill set.


No control group, the sample is too small and self-selected... The researchers should be ashamed, and their institutions too...

It was pretty valuable for me to read. I don’t like using LLMs much for coding, because the shift is from creating a solution to verifying one. This paper helped articulate that. Plus, it’s still useful data if you understand what the data is.



The negative karma with no explanation definitely weakens the HN discussion rather than deepening the investigation of the topic.

I beg to differ. AI has made it possible for humans to pursue critical thinking. Overwhelmed by basic facts and routine work, and limited by bandwidth and eight hours a day, we hardly have the luxury to think above and beyond. That's when you hire consulting firms to produce the content, the ocean of information, the type of work now potentially suitable for AI.

It is time for humans to up their game and focus on critical thinking, if only because AI is still unable to perform it. Eventually there is hope that AI will be able to handle critical thinking too, but it remains just that, a hope, at the current state of the art.


It really has been a sight watching the loudest anti-AI people flog this around, then turn to rage when you clarify the range of the actual conclusion.

Generative AI got a lot more useful when I started seeing it abstractly. I put data in, it finds correlations in its neural network, and produces different data. I have to be intentional in interpreting that data to figure out what to do with it.

Once I started thinking in terms of "this word will kick up a hornet's nest in that branch of the network and make the output useless. Let's find another word that gets closer to what I'm aiming for," the outputs got so much better. And getting to the good outputs took fewer iterations.
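
(A made-up example of the kind of thing I mean: asking for "a quick script" can drag in throwaway-code habits from that corner of the training data, while asking for "a small, well-documented function" lands somewhere better. Same goal, different neighborhood of the network.)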


> Generative AI got a lot more useful when I started seeing it abstractly. I put data in, it finds correlations in its neural network, and produces different data. I have to be intentional in interpreting that data to figure out what to do with it.

In my opinion this is a mischaracterization, just as you stated others have "raged" [0] about. The simple question for you is: how do you know how to interpret? When precision and/or depth have no critical bearing, I agree with your sentiment. However, shades of grey appear often, and quickly, in even the simplest prompts. People who do not already have the skill to "interpret" the data, as you put it, can (and probably will) assume it is correct. That end user is also not constantly reminded of the age of the underlying data the model was trained on, nor are they aware of how an LLM foundationally works or whether it is reasoning or not, amongst many other unknowns.

Yes, while I feel as though the Microsoft report can have an air of "yes, that's the condition we expect," you're also not considering other, very important, inputs behind that trivial response. Read the paper in the context of middle and high school students: now how does the "rage" feel? Are you a parent on a school board seeing this happen firsthand?

Not everyone has the analytical pedigree of people like yourself, and the easy access to LLMs is pissing people off as they watch a generation being robbed via the easy (and oft-wrong) button.

[0] "It really has been a sight watching the loudest anti-AI people flog this around, then turn to rage when you clarify the range of the actual conclusion."


edit: I mistook soapboxing for sincere interest in discussion. Please disregard.

> What's the unique angle on this that puts it in the same genre as the worries over new technology intersecting with ancient human ills that have vexed philosophers since the introduction of writing?

So you've responded without addressing any of the concerns outlined, instead providing personal justification and musings rather than engaging with the actual question. The cherry on top is your personal exchange at the end. Awesome.


I assumed your misreading and mischaracterization weren't made with ill intent, since I could have worded that better; I ignored it aside from the clarifying edit, and focused on the bit that was interesting.

It was my opinion, as stated. Your response was not part of the conversation but instead a stream of thought with no relation to my response to you. Given that this needed to be explained, I'm not too surprised you were blocked. Enjoy!

Can you offer a few examples of the kinds of work/projects/tasks you've used AI for?

Guish, a bi-directional CLI/GUI for constructing and executing Unix pipelines: https://github.com/williamcotton/guish

WebDSL, a fast, C-based, pipeline-driven DSL for building web apps with SQL, Lua and jq: https://github.com/williamcotton/webdsl

Search Input Query, a search input query parser and React component: https://github.com/williamcotton/search-input-query


Thanks for clarifying; based on the title, I totally thought the study was about critical thinking in general.

For the record, I’m not going to read the article to verify your statement.
