Teachers: AI is making children dumb as fuck (reddit.com)
48 points by Onavo 5 months ago | 77 comments



I imagine teachers said similar things when word processors became a thing. And calculators. And spellchecking. And the internet. And wikipedia. And smartphones.

Was it Plato who didn't like writing because it reduced students' ability to memorize things? (It was - Socrates makes the argument in Plato's Phaedrus.)

Is there any actual evidence to assume that children are currently "dumb as fuck", or that this is caused by "AI"?


When word processors became popular, educators lamented the loss of handwriting skills. This became generally true. Many people have terrible handwriting, since, as it happens, if you don't practice a skill, you lose it over time.

When spellchecking became popular, educators were afraid of students losing the ability to spell. This also became generally true. How many people do you know who depend on spellcheck to write an email?

When the Internet and Wikipedia became popular, educators were afraid of people being unable to do their own research. This became generally true. Many students (afaik) still turn in Google search results and Wikipedia articles as sources, and mis/dis-information is a massive problem that's been hyper-accelerated by the rise of low-information, high-volume social media.

When smartphones became popular, educators were afraid of students getting sucked into them during class. We all know what happened here.

These technologies have certainly made our world better, but let's not forget that real skills were lost in each evolutionary step.

As for "actual evidence" of AI causing educational regression, honestly, talk to teachers. ALL of them have stories on stories on how AI is short-circuiting critical thinking skills.


Is this a problem of teachers lacking teaching skills? We're not getting rid of word processing, spellchecking, phones, calculators, etc. It's up to the teacher (or rather the collective of educators as a whole) to come up with ways of incorporating these tools or working around the challenges they present.

A teacher might be successful in banning some technology from their classroom, but they'll fail at banning it from the lives of their students, or from the world at large.

I have a friend who's an English teacher. She has her students write papers with ChatGPT at home, and has them critique those papers in the classroom. Seems like a much more constructive attitude than saying "AI is making children dumb as fuck" on Reddit.


That's great that your friend is doing this.

Ask her how the student body has changed over time.

I can almost guarantee you that they will wax poetic about how difficult it is to get their kids off of their phones. Or how curriculum in public schools is slowly but surely being dictated by whatever parents feel is important instead of what's actually important. Or how failing kids is way more difficult than passing them, even if they totally deserved those marks. Or how taking phones away isn't feasible anymore because the blowback their admin - and thus they - will get over it far outweighs the benefits.

Most teachers out there are extremely qualified to do their jobs. They just aren't given the tools or the environment to do that in most cases.


> ALL of them have stories on stories on how AI is short-circuiting critical thinking skills.

My experience of teachers/lecturers is that they usually can't give a precise definition of what they mean by critical thinking skills, explain how their curriculum helps develop them, or explain why this is an important skill to begin with. You'd think people who claim to be good at critical thinking could knock this out of the park!

Usually you just get a sort of semi-circular answer, in which the skill of writing essays that please the profs is defined as skill in critical thinking. If you ask what specifically they look for, or for a course devoted to critical thinking itself, independent of their specific subject, they get all huffy. Of course you can't develop such skills without also memorizing lots of critical theory, or archaeological dig sites in Turkey, or whatever.

> Many students (afaik) still turn in Google search results and Wikipedia articles as sources

The alternative is what, academic papers that aren't written for outsiders to learn from at all? Wiki articles cite sources just like academic papers do, and both can be written by anyone. The prohibition against citing Wikipedia never made much sense and people genuinely skilled in critical thinking would challenge it ;)


I can't understand why anyone would argue for being allowed to cite Wikipedia.

It's not uncommon to find claims there that have no citation at all, so you'd have to be required to note whether the part you're using is cited... and if you can do that, what's preventing you from just using the actual source?


Why wouldn't you be able to cite Wikipedia, as long as you specify the date on which you read it? The assumption that it's unreliable doesn't seem to be rooted in any genuine data, especially as I remember teachers and lecturers asserting that immediately the moment it appeared, based on nothing more than "but anyone can edit it!". One might think that was a bit early to leap to conclusions about quality.

It looks a lot like an attempt to preserve the academic culture of all claims being attached to names for reputation and career building reasons.

> It's not uncommon to find claims there that have no citation at all

Obviously. Any claim eventually has to bottom out at either personal experience, or citation of someone else's words. If you just follow the citations to the end you'll end up at a paper that just asserts something without a citation, and that's fine. Academics assume such statements must be reliable because of the institutional affiliation of the authors, but that's hardly a strong basis. Wikipedia does at least have a working system for fixing mistakes that isn't "spend two years arguing with a journal editor to get nothing more than an expression of concern at the top".


There are various levels of rigour. For casual conversation any source is usually enough, even Wikipedia or a Reddit comment. And sometimes anything short of independent verification is not enough. The question is: where on this spectrum do elementary/high school essays fall?


That's a different question to what I was replying to, but if they do require citations then it seems like a poor idea to teach anyone to rely on this slight shortcut.


Anecdotally, no one on my computer science course is doing the work. You could generously call AI a learning tool, and it does help some people, but it seems to have replaced a lot of people's actual ability to write anything. Have you really learned something if the only way you can go about it is passing questions to ChatGPT? At that point the AI is doing the work and you are just a data entry person. I should add that this approach doesn't scale well. For instance, there are apps that let you take a picture of an equation and get a worked solution. In the short term, people's ability to solve equations seems to have improved, but I don't think they'll learn as much doing it that way.

I have been a part of several group projects, and by this point almost all I've seen anyone else do is use ChatGPT. If we need to do some work and write a report on it, the work won't get done and Mr. GPT takes care of the report. I feel crazy for not using it. I'm not sure how accurate this is, but I remember hearing that ChatGPT's usage metrics drop by something like half over the summer holidays. The scale of this is so great.

I think the worst part is that it's isolating. Perhaps before you would have asked a tutor or a lecturer or a friend to help you with something, but now it's just GPT. There are lab sessions occasionally where a professor will show up and give guidance on the content, and I remember a good few sessions where it would be me and one other guy there out of 100+ people. I would guess everyone else just asks the machine.


Just as an opposing anecdote: when I was doing CS at uni, the group project, which was randomly allocated, was just as bad, and this was 20 years ago. Only one other person in my group could coherently program, and he was shit. So I did all the code. It was isolating in that situation too. GPT might not be a factor; it's just the flavour of the moment.

I was a bad student because I didn't need to try. Those few of my peers who did try and were good just blew it out the park because the competition was abysmal. This was a uni in the UK with a fairly well respected CS department at the time.


Your anecdote actually makes the AI future look worse and your experience much better.

Setting aside the absolute quality of work and learning: in your situation, the people who tried did a lot better. There were actual incentives to try and learn, because you were rewarded with better grades and then, going forward, likely a better job, career, and life.

OTOH, the commenter you're replying to is suggesting the opposite. As someone who's not using AI and is actually putting in the effort, they're not being rewarded for it, to the point that they're wondering whether they're the stupid ones for trying to learn in a learning institution.

The incentives are completely backwards.


Now imagine that those other people decide to contribute by "writing" the report that you're going to get graded on, with no understanding of code and not doing it themselves. Imagine that some people who are semi-competent at programming decide to contribute likewise. It takes them from do-nothing layabouts to active hindrances. Much worse than someone not knowing anything is someone pretending to know something, and I thank my lucky stars that none of them tried to write the code with AI.

> This was a uni in the UK with a fairly well respected CS department at the time.

I feel this sentence.


So-called "professors" are just slow to adapt. University is a huge waste of money currently. ChatGPT is a blessing.


What's the opposite of the slippery slope fallacy, where we convince ourselves that there isn't a slope at all? I agree that there have been doomsayers with every technological advance, but surely there can be a point where a piece of technology truly does remove the need for people to think to a problematic degree.


> What's the opposite of the slippery slope fallacy, where we convince ourselves that there isn't a slope at all?

Sounds like healthy skepticism to me. Assume nothing has changed until proven otherwise.


Assume things do not represent a discontinuous change until proven otherwise.

AI has changed some things, and will change some more. Pretending otherwise isn't healthy skepticism, it's hiding your head in the sand.

The real question is, which things does it change, and how much? Don't assume a discontinuous change without enough evidence, but there is enough evidence that something has changed.


> there is enough evidence that something has changed

Of course, something has changed - every invention changes something, that is almost a tautology. However, is there any evidence that the change is negative and drastic enough to warrant my attention and time of day? I think not.


My mother and I were just discussing how dumb people have gotten in just the past few years. It’s like they don’t think any deeper than a meme - no nuance is considered. You easily see it in the political realm. The only thing we can come up with is social media being the cause. I remember seeing a video (on Reddit) of a teacher teaching his class, and 100% of the students were looking at phones instead of listening. Say what you want about the quality of the teacher, students, etc, but that level of disengagement didn’t happen before social media.


At least in the political realm, memes have worked for a very long time - at least more than a century, maybe even millennia. An example: "Ma, ma, where's my pa? Gone to the White House. Ha ha ha." Another: "Peace. Bread. Land."

So I would say that there seems to have always been a segment of the population on whom political memes were effective - probably more effective than longer discourse.

Now, you could argue that more people are in that camp today. I can't argue with that; I don't have any data one way or the other. But I would at least suggest the alternate possibility that it's more visible today that people are in that camp.


I went to school before smartphones. Back then we used to stare out of the window, throw paper aeroplanes, whisper to friends or - mostly - just sit there being insanely bored and checked out. There was never a time when teachers held all their students in rapt attention.

At least with the phones those students might well be learning something, or at least getting some reading practice. Even in the most pessimal case it's not useless. When I was at school there were still teachers whose entire teaching methodology was writing out notes and diagrams on a rolling blackboard, and we copied them onto paper. Literally just human photocopiers! You think we were engaged? No chance. I remember about three facts from years of being in those classes, and those facts are useless. Even scrolling Instagram would have been 10x more educational!


I understand the sentiment, but I have a hard time seeing any educational value in 99% of anything I've ever seen on TikTok. There definitely were bad teachers, and I assume there still are. But the window and paper airplanes weren't tuned to suck you in for hours and hold your attention. I still think things are legitimately worse today.


Then forbid those specific apps, not the device itself.


If technology has truly removed the need to think in a certain capacity, then surely it has just expanded our skills enough that we don't need to be so personally skilled in that capacity. We still teach kids basic math, although I do nearly all my daily arithmetic with a calculator these days. The dream of technology is that we will be able to let go of skills that we used to see as essential. That's success, not failure.


I think I agree that AI will be a net benefit, but in a world where it is difficult to have a meaningful conversation with a human because that human is so used to talking to a chatbot, or where someone doesn't have the skills to research a problem they're facing because they've always had a chatbot to ask... I would argue that something of value has been lost.


Being unable to have a meaningful conversation sounds like general technological skepticism to me. I don't see any reason to think talking to chatbots will erode our conversation skills, any more than the internet or TV or books have in the past. People interact in the real world much the same way they did 100 years ago. As for having the skills to research a problem: is ChatGPT much different from the adoption of search engines? Many adults probably couldn't find information in a library very easily these days, but if they have no need for that skill anyway, I don't see it as a loss.


I don't think your analogy to the internet or TV holds. You don't talk with the TV, so when watching you use a different set of skills. The same goes for the internet.

But with multi-channel, multi-modal AI you can have conversations. If you do that a lot, you might come to feel that you don't have to be polite, or say sorry, or even admit your own mistakes. The current AI doesn't care about those things. But people (the current ones) do.

I'm not saying this change is bad or good, but I also don't think there will be no change in how we interact with each other.


It’s not quite a fallacy name, but I’ve heard that called boiling a frog: https://en.m.wikipedia.org/wiki/Boiling_frog


Those things arguably did reduce intelligence. It's frustrating how often people present this point (on varying topics) in a completely standalone fashion. I.e. they say "yeah yeah, people had X complaint at Y historical time too", implying those historical people were wrong because...? They were in the past? The world hasn't ended? Technological progress continues? Usually none of that is evidence to the contrary. In this case, none of that implies that people in developed societies aren't becoming dumber over the centuries. I think people sometimes make the assumption that knowing more is a synonym for intelligence.


> Those things arguably did reduce intelligence.

Citation needed. They may have reduced the competency level in certain skills (e.g. writing in cursive), but I doubt that those effects carry over to "reducing intelligence".

People in developed societies *aren't* becoming dumber over the centuries. We get better at dealing with problems we frequently encounter, and worse at dealing with problems we rarely encounter. Is this a problem? I don't know.


Think it'll be hard to find solid evidence at this point, tbh.

There's a huge difference between having quick access to your local canon, vs the entirety of human output. Writing probably had a -5 effect on social bonding in your local community, whereas AI may comparatively be a -100 on having to think or solve problems at all.

We've seen how social media has created cults and hive minds; AI is probably going to set us on a path where every single thing we do is standardised and optimised. Why spend 1000x the time developing a solution when the entirety of human thought has contributed to achieving it in x time?

It's a reality though, and we have to deal with it. We need a huge refactoring of the education system which takes into account the realities of the modern world, as opposed to grinding people out to work as bureaucrats in bigcorp.

Using AI as a tool to 'get to the next level' should be a big focus of this. Collectively asking, as humanity: what can humans actually do that current AI can't, and how can we leverage new tech to move forward? Once we've answered this, we should plan courses around it.

Ideally this'd end up with doing more work with our hands, getting outside and dirty, doing work on-site, taking part in mock events, massive role playing games, building shit with tight deadlines and requirements, etc.

Bookwork and exercises are done - old hat. They're all solved. The Ghost in the Shell series was hugely influential for me, and it's quite incredible how accurate it's turning out to be. Maybe such topics would benefit from debating matters in future/dystopian worlds from fiction, so we can make sense of what's in store.


"the entirety of human output" is doing a lot of heavy-lifting.

Niche topics? Existing LLMs are crap.

Things people think it is actually knowledgeable about? Also crap.

Glue on your pizza? Recommending the user kill themselves? The code output is equally tragic, but most people are bad devs, so they can't see the shortcomings.

You're talking about the "next level" for humans - it's a next-word prediction model. It can't reason or do useful things. Overhyped. This is the same nonsensical rhetoric people here spouted over crypto.


As a teacher with no programming or compsci background, I've developed an entire interactive site which currently uses 50k database records: mapping grammar structures to questions to classical art to interactive quizzes to videos to collocations, etc. It's effectively a Wikipedia of the English language, with everything arranged into interactive lessons. 449 Svelte/JS files so far.

ChatGPT helped me achieve this from scratch. The code is probably shite, as I'm chasing my own tail understanding what's happening, but I can only imagine that there are others out there using "the entirety of human knowledge" to fast-forward their development and realise their dreams.

Of course, at the very top and bleeding edge, current systems aren't very helpful. And this is where we should be placing our curricula; treating school like a microcosm of elite society.


Why should “niche topics” even exist anymore?

I mean, seriously. The entire internet economy right now is all about companies trying their hardest to make sure you spend all your time on their platforms.

It’s evident at this point that all of them believe AI is the future. So if niche topics exist where the immediate gratification of AI is not suitable, companies are not gonna sit around doing nothing. Either they will try and expand their offerings to cover those niche topics, or they will try and eliminate interest in those niche topics because they represent competition and therefore a threat to their businesses.


> Why should “niche topics” even exist anymore?

I don't care what the "internet economy" wants. People are interested in niche things. If the internet economy won't help them on those things, they'll find something that will.


The calculator example is a good one. Prohibited or repressed for a couple of decades at best because it would "make people unable to do math", as if people wanted to do math by hand.


When word processors became a thing in the 1980s, my teachers loved it. They didn't have to read everyone's scrawly handwriting.


My experience was a bit different. We were forced to use fountain pens, and if your handwriting wasn't good enough, you'd just be punished until it improved. Needless to say, several decades later my handwriting is still awful.


I always find that whenever such topics come up for discussion, Ted Gray has the best take - https://theodoregray.com/BrainRot/.

With AI and specifically LLMs, I feel many of the points raised there about 25 years ago still stand. LLM-based learning isn't meant to push students into thinking about the problem, or even to guide students along the path to the correct answer. It's mostly about returning some answer - mostly right, but not really of good quality. As Ted Gray's article points out, any assisted learning mechanism should work towards making the student think about the problem at hand and reason their way through it. It's an aide to the human teacher because, being a machine and most importantly software, it can adapt to each and every student's ability and work at the student's pace, instead of three one-hour periods per week. An LLM by definition is the opposite: it gives you the next most probable token without any idea of what it's saying. There is a lot of work being done on this, of course, but what's available today isn't a reasoning aide/teacher.
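To make "next most probable" concrete, here's a minimal toy sketch of greedy next-token decoding (the context, vocabulary, and scores below are invented for illustration - a real model scores tens of thousands of tokens with a neural network):

    import math

    # Toy "language model": a hand-written table of scores (logits)
    # for which token might follow a given context. A real model
    # computes these scores with a neural network; these are made up.
    logits = {"the cat sat on the": {"mat": 3.1, "dog": 0.2, "moon": -1.0}}

    def softmax(scores):
        # Turn raw scores into probabilities that sum to 1.
        m = max(scores.values())
        exps = {tok: math.exp(s - m) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits["the cat sat on the"])
    # Greedy decoding: emit the single most probable next token.
    print(max(probs, key=probs.get))  # -> mat

Nothing in that loop knows or checks whether "mat" is true or sensible; it's just the highest-scoring continuation, which is the point above.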


Thanks a lot for posting this. I agree that many of the points still hold, and, as a (small-business) author of educational software, they reinforce my ideas about what that software should look like, especially concerning open-endedness and being "capable of doing senseless things if asked".

I do think that the arguments about violent games either seem a bit dated or simply don't hold at all beyond operant conditioning. Especially in Europe, children and adults are hardly in a position where such 'reptilian brain' conditioning matters.


I'm struck by how the description of the more ideal case is just, well, mostly Montessori-style education. But a bit cheaper.

One wonders how much we'd have to increase the budgets of the various school systems to have mostly Montessori-style schools, and not even have to worry about whether we'd like better schools.


Aside from the fact that the title of TFA is unnecessarily crude and likely overstates the case, perhaps the mode of evaluating students can adapt to the new reality of living with LLM assisted academic work. At certain points in my academic career, oral examinations were part of the evaluative package. Perhaps making that a standard part of courses would help. Of course, subjectivity enters the equation but if the rubric is defined, some of that risk can be mitigated.


> It’s not just AI, it’s the internet in general.

> My students (7th grade) have a difficult time answering a personal prompt like “Who was the last person you had an argument with? Why did you argue and how did you resolve the argument?”

> They immediately jump on Google like it’s an answer they can find on the internet.

Well, I'm horrified. Is the iPad-baby-to-AI-schooling pipeline the final iteration of the behavioural sink?


The OP is a teacher seeing their students formulate their reasoning less well than before.

It's reasonable to think that having an assignment done for you and simply reviewing it is less likely to educate than actually doing the work yourself.

Basically, the hypothesis is that assignments are a Chinese room: we assume that the ability to produce (and validate) a good response implies knowledge and ability. That's an interesting take.


In a way the statement is true but incomplete.

Mechanised farming makes our farmers physically "weaker". Having a mobile phone makes me forget phone numbers that were once memorised.

Like with all innovations, we offload some skills to the machine, and discover new skills the human needs to operate the machine.


What skill are we offloading to the AI?


As it stands right now: bullshit, mostly.


Hmmm, that's, well, not that bad of a thing to offload, right?


Ehhh, there is a lot of spam and propaganda that AI can do really well. Not an offload thing - a "shouldn't be happening at all" sort of thing.


Unfortunately, I really do think that's part of the problem.

AI generates bullshit. School writing assignments are "bullshit" in the sense that nobody wants to read the result. The goal is not to produce something novel, insightful, or interesting. The goal is to prove that you absorbed the material and were able to formulate a coherent response based on it.

The hope is that the student will eventually be able to write something that isn't bullshit. The AI will never produce anything other than bullshit, at least without a significant technological revolution. The AI can nonetheless deliver bullshit better than a child, despite the child having potential to eventually do better.


> The AI can nonetheless deliver bullshit better than a child, despite the child having potential to eventually do better.

Honestly, I always felt slinging bullshit was a super useful skill. I'm pretty good at it, and at synthesizing on the spot, from practice. When the big models first hit and I saw how much better at bullshit they were than I was, it really made me feel uncanny-valley uncomfortable. Their output had the same truthy, confident timbre that I could write, but it drew from an unfathomably wider shallow pool than I ever could, no matter how much I've read and could tenuously link together.

I wish this wasn't the case. I also wish bullshit wasn't a useful skill and more people were inoculated against it. I'm just waiting for the new machine-gun phishing wave to spring up, where enough PII is available from hacks and for-sale datasets to craft very persuasive, targeted phishing attacks at scale.


AI hasn't been around long enough to have this effect.

Something else is making them 'dumb as fuck'...

Could it be the parents and teachers who are responsible?


> AI hasn't been around long enough to have this effect

Doesn't have to be much. A year or two of it being around is enough to see results in the kids in middle school and above.


> Something else is making them 'dumb as fuck'...

Covid.

Covid and sitting at home for ~2 years doing 'nothing'.


Part of me wonders if this will make older employees more desirable - if lots of new grads are completely incompetent and incapable of learning, perhaps a selection bias may emerge in favor of older applicants.


I think this is probably somewhat already the case due to smartphones. Younger generations that grew up with a smartphone as their first computing device have a lot less tech literacy than older generations that started with a desktop computer with no internet connection.


what a failure of imagination by teachers and parents. there’s never been a better time to be a student of any age. now more than ever, everyone is a student forever, no matter your age.

gpt4 is the best educational tool the world has ever seen. it’s only going to improve.

as the environment changes, so must we change. parents and teachers both have been static for far too long.

for the 10% of people who are passionately curious, they are going to learn faster and faster every day.

for the rest, it’s not like they were being well educated before. life will go on.


Is there a way to orient assignments so they can't be solved easily by AI?

Maybe classes will have to include more non-computer exams, to test pupils' ability to do tasks like solving mathematical questions without AI.


Whiteboard interviews as lessons.

2010: Learn coding at college so you can pass an interview. 2030: Learn how to pass an interview so you can do coding at college.


As a student (in the academic sense when younger, and in a philosophical sense throughout life), I've been taught by teachers, and recently I've explored subjects by asking questions of large language models.

The latter has been a much more pleasant experience. Infinite patience, infinite availability, seemingly infinite personalization. I'm also able to ask analogous questions to concepts from other fields.

I would be hesitant to see AI adopted during the early years of education, however.


This could be a good starting point for self-studying. I would not totally rely on it being correct, though.


I dread to think how much bad information it gives you whilst both you and it are incapable of realising it.

Ask it about niche topics - hell, ask it about NixOS configs. It hallucinates all kinds of functions that don't exist, with absolute confidence, and generates configs that mix programming languages.

It can't even get the basics right in most programming languages - whether to use hashes or slashes for comments in its output. It's complete shite.


IDK, but AI seems to be the lesser problem to me; social media is focking the generation badly. AI? They couldn't care less.


I wrote a small novel but lost it due to user error, so here's a hot take with much less nuance from a college math professor. Just like calculators meant we needed to (and could) pivot the skills we ask our students to learn, AI will also shift the paradigm. But unlike calculators, which are very good at what they're meant to do, AI produces BS in the academic sense [0] and, if used uncritically, frequently produces work that's just nonsense. So I tell students I don't care so much if they use it, but that the students who do tend to end up looking foolish.

[0]: https://link.springer.com/article/10.1007/s10676-024-09775-5


As above, so below. If you think the kids are incompetent and entitled, look at the adults.


All resistance to the use of computers running "AI" must be suppressed. Good luck with that. The desperate, nonsensical retorts in these "AI" threads suggest that "AI" fans just cannot accept that some people may not want to use it or may not like correcting the "work" of people who do.


AI is making children even dumber.

The death of reading culture, the rise of video games, then smartphones, then social media and doomscrolling, and the lack of tactile physical contact with the environment, and decline in socialization had already made kids much dumber.

Case in point: someone will retort by pointing to IQ scores, or to how school knowledge doesn't matter because we can always Google it (or, in 2024, ask AI). That person would be the product of earlier, pre-AI dumber-ing processes.


People now read more than at any time in history. In multiple languages, even. Maybe that changed in the last few years, but millennials read way, way more than baby boomers, and buy more books (and when you compare the % of wealth dedicated to books, it's not even a valid comparison, and only Gen Z is close).

Also, I've talked to my librarian recently about the expansion of the political theory aisle. He said that while my generation (millennials) led the decline of the adventure novel and the expansion of fantasy and speculative fiction, Gen Z reads a lot of really "boring" books, about history and political theory. Maybe it's only my country, but that makes me feel less alone and quite optimistic about the future.


> People now read more than at any time in history

They consume more tripe (including written tripe) on the internet, not more books (physical or of the e-kind); even when they do buy them, it's mostly self-help and other low-brow stuff. Even professors and supposedly educated people read less and less books, and can concentrate less on what they read.

Not sure if Gen Z is making a U-turn on this - I doubt it, but I'm open to being proven wrong (and also, they do it at the library? Maybe a small minority?).


Sorry, it's a mistranslation: I translated 'libraire' as 'librarian'. I meant bookstore teller (I think). And yes, the average age at the bookstore has actually come down a lot, imho.


> They consume more tripe (including written tripe) on the internet, not more books (physical or of the e-kind); even when they do buy them, it's mostly self-help and other low-brow stuff.

People consuming tripe is nothing new. Nor is buying self-help books.


No one put you in charge of policing the reading tastes of other people.

Just sayin'.

> Even professors and supposedly educated people read less and less books

"Fewer", not "less". "Book" is a countable noun, not a mass noun.


https://dictionary.cambridge.org/pl/dictionary/english/less-...

What's your opinion on Cambridge's entry for "less and less"? More interestingly, what's the purpose of grammar policing? If enough people use an expression, it becomes language. It's weird how people claim some kind of superiority just because they subscribe to a certain phrase.


Just giving the previous poster a taste of his own medicine.

> What's your opinion on Cambridge's entry of "less and less"?

None of those examples are countable nouns. Books are discrete and countable. Time (e.g.) is continuous.


My reading age was off the chart (beyond 16) at the age of 5, thanks to the combination of voice acting and dialogue boxes in '90s games - prior to that, I struggled. Edutainment value can definitely be a thing. I got some certificate for it, and someone from outside the school came to validate that it was true, as it was so shocking. I was reading "at a university level" and devoured books.


It's also a common symptom of high-functioning ASD, known as "hyperlexia".


Wow! I didn't know that! Guilty as charged in terms of ASD, although it's a recent revelation; it went unnoticed next to my ADHD as a child.

It would have been great to know back then - it would have made my life much easier going forward! Instead, the "achievement" got celebrated by the school, which is now quite laughable, and then I really struggled once I got to a school of 2,500 people instead of ~30.

Some of these "what happens to 'gifted' children" horror stories of overlooked ASD really resonate.



