Hacker News | LocalPCGuy's comments

There is always a "better mousetrap", and there are those who continue to use the old one because they "know how it works and it's set up just the way I like it". And there are others who try every new mousetrap that hits the market. (And that's OK; I'm not slighting either one.)

I will say that I personally have never really gelled with VSCode; no matter how much I try to customize it, it still feels just a bit off. For me, it's too much to be a simple editor like Sublime Text or Neovim, but not quite enough to be a full IDE like IntelliJ or Visual Studio. It does just enough that I expect a bit more of it, and it often fails to deliver. Right now I tend to use two editors: one very simple one for viewing/editing text files and one IDE (currently IntelliJ) for coding in a project.

On topic - Zed is actually a really nice editor. It had some rough edges last time I tried it, but it's probably about time to give it another go.


This is a bad, sloppy regurgitation of a previous (and more original) source[1], and the headline and article explicitly ignore the paper authors' plea[2] not to use the paper to draw the exact conclusions this article says it draws.

The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.

> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

> Additional vocabulary to avoid using when talking about the paper

> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".

1. https://www.brainonllm.com/

2. https://www.brainonllm.com/faq


Yeah I feel like HN is being Reddit-ified with the amount of reposted clickbait that keeps making the front page :(

This study in particular has made the rounds several times, as you said. It measures the impact on 18 people who used ChatGPT just four times over four months. I'm sorry, but there is no way that is controlling for noise.

I'm sympathetic to the idea that overusing AI causes atrophy but this is just clickbait for a topic we love to hate.


Ironically, you’re now replicating the Reddit-ified response to this paper by attacking the sample size.

The sample size is fine. It’s small, yes, but normal for psychological research, which is hard to do at scale.

And the difference between groups is so large that the noise would have to be at unheard-of levels to taint the finding.
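To illustrate the point (with made-up numbers, not the paper's data): when the between-group difference is large relative to within-group noise, even two groups of nine yield a t-statistic well beyond anything noise plausibly produces. A rough Python sketch:

```python
import random
import statistics

random.seed(0)

# Hypothetical data for illustration only: two groups of 9,
# means 30 apart with within-group SD of 10 (a huge effect, d = 3).
group_a = [random.gauss(100, 10) for _ in range(9)]
group_b = [random.gauss(70, 10) for _ in range(9)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    nx, ny = len(x), len(y)
    return (statistics.mean(x) - statistics.mean(y)) / ((vx / nx + vy / ny) ** 0.5)

t = welch_t(group_a, group_b)
print(f"Welch's t with n=9 per group: {t:.1f}")
```

With an effect this size, t comes out far above the conventional ~2 cutoff despite n=9 per group; small samples only fail to detect small effects.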


Yup, I even found myself a bit hopeful that maybe it was a follow-up or new study and we'd get either more or at least different information. But that bit of hope is also an example of my bias/sympathy to that idea that it might be harmful.

It should be ok to just say "we don't know yet, we're looking into that", but that isn't the world we live in.


Ironically, there should be another study on how not using AI is also leading to cognitive decline on Reddit. On programming subreddits, people have lost all sense of engineering and have simply become religious about being against a tool.

>I feel like HN is being Reddit-ified

It's September and September never ends


Yeah, it's clear no one is actually reading the paper. The study showed that the group who used LLMs for the first three sessions, then had to do session 4 without them, had lower brain connectivity than was recorded in session 3, with all the groups showing some kind of increase from one session to the next. Importantly, this group's brain connectivity didn't reset to session 1 levels, but landed somewhere in between: they were still learning and getting better at the essay-writing task. In session 4 they effectively had part of the brain network they were using for the task taken away, so obviously there's a dip in performance. None of this says anyone got dumber. The philosophical concept of the Extended Mind is key here.

IMO the most interesting result is that the brains of the group that had done sessions 1-3 without the search engine or LLM aids lit up like Christmas trees in session 4 when they were given LLMs to use, and that's what the paper's conclusions really focus on.


> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

Maybe it's not safe to say yet, but it matches my experience using ChatGPT to code for the past eight months. My brain is getting slower and slower, and that study makes a hell of a lot of sense to me.

And I don't think we will see new studies on this subject, because those leading society as a whole don't want negative press about AI.


You are referencing your own personal experience, and while that is an entirely valid opinion for you to hold about your own usage, it's not possible to extrapolate it across an entire population. Whether or not you're doing that, part of the point I was making was that people who "think it makes sense" will often not critically analyze something because it already agrees with their preconceived notions. It's super common; I'm just calling it out because we can all do better.

All we can say right now is "we don't really know how it affects our brains", and we won't until we get more studies (which is what the underlying paper was calling for: more research).

Personally, I do think we'll get more studies, but the quality is the question for me: it's really hard to do a study right when, by the time it's done, two new generations of LLMs have been released, making the study data potentially obsolete. So researchers are going to be tempted to go faster, use fewer people, and be less rigorous overall, which in turn may make for bad results.


The point is, you don't need to play with extra features and customizations if you don't want to, so you can keep it "a step above a text file". That said, having those additional features is nice when you want just a little bit more, or you want to link a note file with your todo file, etc.


FWIW, most browsers now default to a viewport zoom with Ctrl/Cmd-+ rather than a font-scaling zoom. I think browsers generally have an option to change that, so if you prefer one behavior but are getting the other, it may be worth checking your browser's settings.


I don't begrudge them their ability to make a living, but it is a bit ironic that I can't read an article about reading without a subscription or archival service. I get that isn't really the point of the article, but I do think that news outlets' and magazines' inability to find a way to successfully move beyond the heavily subsidized, advertising-supported model (and thus the current clunky experience in general) does little to inspire more people to read. I'm not claiming it actively reduces readership as a whole, just that it's one less avenue for increasing the desire to read.


I would even say, if duplicated words are an issue, you could go with:

  You can [get the Amaya Browser] from the download page


Nice refinement there.


> ...accessibility issue? particularly when there's are buttons right above it that say...

Yes, those buttons may not be "in context" when the page is not being viewed in a visual medium.

> To download PiPedal, click here.

Another appropriate link in this case could be simply:

  *Download PiPedal* now!
Or like your last example, just link it slightly differently to emphasize the action:

  To *download PiPedal*, visit the Download Page.
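In raw HTML terms (hypothetical `href`, just to show why this matters for screen readers, which can list a page's links out of context):

```html
<!-- Vague: a screen reader's link list announces only "here",
     which is meaningless out of context. -->
<p>To download PiPedal, click <a href="/download">here</a>.</p>

<!-- Descriptive: the link text itself names the action, so it
     still makes sense when read in isolation. -->
<p><a href="/download">Download PiPedal</a> now!</p>
<p>To <a href="/download">download PiPedal</a>, visit the Download Page.</p>
```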


We can (probably) guess the why: tracking and data opportunities that companies can eventually sell or utilize for profit in some way.


Both of these are basically strawman arguments - there are legitimate, non-tribal reasons to be against the actions taken re: tariffs and the purported anti-corruption tasks. For example, a person can be strongly against government corruption but also be strongly against the current efforts/methods being used for a multitude of reasons. And similar for tariffs. (Not having those debates here, just pointing out that I don't believe those examples hold up.)


As someone with 15+ years of experience, a lot of it FE-specific, that is the advice I always give newer devs when asked: learn the fundamentals of JavaScript, HTML, and CSS (it's like a three-legged stool, even if the JS leg is oversized in the age of web apps). If you know how to program, and you know the fundamentals, you can work in whatever framework is thrown your way.

Now, practically speaking, that's probably better advice for someone with a job and 1-2 years in. To get an initial foothold in the industry, people often need to specialize in one specific thing (React, at the moment, most likely) in order to demonstrate enough competence to land that first job, so I understand how the fundamentals can be backburnered initially. But I recommend devs not let that initial success lock them into that framework; that's the time to go back and learn the fundamentals, go wide, even learn a couple of other frameworks so it's easy to compare and contrast the strengths and weaknesses of each.

And you will want to be well-versed in the framework you currently use day to day: best practices, architecture patterns that work and those to avoid, etc. Knowing the fundamentals will help, but some things will change from framework to framework, even codebase to codebase. So it's always going to be a bit of a balance. But IMO, being well-versed in the fundamentals affords you the most flexibility and employability long-term.

