
This is terrifying. Our ability to think has been our biggest differentiator as a species. LLMs threaten this, and I'll die on that hill.

I’ll share my experience and the experience of my kids so far.

Aside from blindly copying and pasting a response, in which case the learner wasn’t interested in learning and probably would have plagiarized from somewhere else anyway, I have found LLMs to be an incredible, endlessly patient teacher that I’m never afraid to ask a question of.

My kids, who are in their tween and teenage years, are incredibly skeptical and dismissive of AI. They regard AI art as taking creative initiative away from artists and treat LLMs similarly to the way we treated Google growing up, if they use them at all. It’s a tool that can be helpful for answering questions and is part of the landscape of their knowledge building.

That knowledge acquisition includes school, YouTube and other short videos, their peers (online and off), Internet searches, and asking AI. Generally, I regard asking AI as one of the least problematic sources of info in that environment.

While I tend to be optimistic as a default, I truly do think that the ability to become less ignorant by asking questions is a net positive for humanity.

The only thing I truly lean on AI for right now is as an editor, helping me turn my detailed bullet points into decently crafted prose, and for generating clear and concise transcripts and takeaways from long meetings. To me that doesn’t seem like the downfall of human knowledge.


> [Writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

- Socrates (written down by Plato)


Thank you. Luddite tendencies are as deep within humans as the desire to kill is. It is a personification of the death-drive and all self-destructive tendencies within humans.

It's also the tendency towards the "precautionary principle" AKA Nietzschean "last-man" style thinking applied to the world infinitely.

We should root this kind of thinking out aggressively, at least from the academy.


I think this is overestimating the impact of LLMs.

Fact is, even if they are capable of fully replicating and even replacing actual human thought, at best they regurgitate what has come before. They are, effectively, a tutor (as another commentator pointed out).

A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

I personally still don't see the actual value of LLMs being realized vs their cost to build anytime soon. I'll be shocked if any of this AI investment pays off beyond some minor curiosities - in ten years we're going to look back at this period in the same way we look at cryptocurrency now - a waste of resources.


> A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

What changes is the educational history of those humans. It's like how the world is getting obese: there are areas where, empirically, we don't choose our own long term over our short term. Apparently homework is one of those things, according to teachers like the one in TFA. Instead of doing their own homework, students are having their "tutor" do their homework.

Hopefully the impact of this will be like the impact of calculators. But I also fear the impact will be like having tutors do your homework and take your tests until you hit a certain grade, at which point the tools you're reliant on suddenly don't work, and you have no practice doing things any other way.


I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to avoid thinking for themselves. For example: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh, uhm, I don't know" and he fumbles out an answer. Especially in the context of learning, people who have self-control and use AI deliberately will benefit. But those who use AI as a crutch to keep up with everyone else are ill-prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can get a degree like everybody else.

I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.

Perpetuating a broken system isn't an argument about the threat of AI. It's just highlighting a system that needs revitalization (and AI/LLMs are not that tool).


>at best they regurgitate what has come before

I keep seeing this repeated, but it seems people either take it as being self evident or have a false assumption about how transformers work.


What's your opinion on calculators?

Update: I meant to compare calculators to something like a slide rule for logarithms. I'm not from the US and I tend to forget that some people use calculators to take 20% of 500.


Where I live these are not allowed in the classroom until 7th grade or so, i.e. when the kids have learned the skills and can then employ calculators mindfully.

This seems reasonable. When I was in school we started using calculators and other technology in 9th grade. That was in 1980 though.

I will occasionally do long multiplication in my mind's eye just to make sure I can, lol. Anything more complicated than that, most people will not be doing anyway. University students, however, almost universally do need to write sometimes. Similarly, if I had decided to do something maths-heavy at uni, I would be expected to be able to do some pretty complex maths without a calculator first, even if I didn't need to do that all the time. It's pretty standard that higher education requires a level of intellectual rigour that is totally unnecessary for day-to-day life. In the case of ChatGPT, it's allowing people to completely bypass that process even in those settings. Meaning you NEVER learn to do it, not just that you don't do it day to day.

>Similarly if I had decided to do something maths heavy at uni I would be expected to be able to do some pretty complex maths without a calculator first, even if I don't need to do that all the time

I got an engineering degree and don't remember ever being required to do math without a calculator. Of course, some things are easier if you don't need to bust out a calculator for everything.


Does calculating numbers based on concrete rules require "thinking" in the same way OP talks about? I think not.

False equivalence.

With a calculator, the end result is still the same: a (typically numerical) answer of some kind. Writing one's own essay vs. getting an LLM to regurgitate it results in vastly different outcomes.


You don't allow students to use calculators for operations they haven't personally mastered. If you don't learn how to add two numbers on your own, the rest of your learning is in serious jeopardy.

This is the author's lament. These students are skipping over personal mastery.


Not a super big fan, honestly. I'm a bit horrified when I see high school seniors who are smart, and have been through the entire HS math sequence... dig around in their backpack for a calculator to find 5 times 1.5 or 20% of 11.

I'm glad that we have calculators and computing devices, but I'm not glad that they have made teens with basic numeracy into an endangered species. Many tools we use expand our understanding, but the calculator causes our arithmetic skills to atrophy.


From my experience, the more advanced math you learn, the worse you become at arithmetic. I knew a lot of math majors in college, and all of them used calculators all the time.

Yes, I've gotten worse at arithmetic, too.

The point is, one is hard pressed to find anyone who can do much arithmetic-- even trivial things.


Depends on how far in basic arithmetic they get.

But if the symbolic manipulation is done by hand, and numbers are just plugged in at the end to get the final result, with a sanity check that the answer is realistic enough, well, I think that is fair enough.

And spreadsheets are also useful when you need to add up a bunch of things or multiply them.


A more apt comparison might be asking the abacus what it thinks of the calculator.

Calculators are reliable and predictable, so losing skill at that kind of calculation is a safe, compartmentalized offloading. We offload an extremely clearly defined set of tasks, and it gets executed cheaply, immediately, and perfectly.

LLMs are different.

A closer analogy would be something like computer algebra systems, especially integration. We can offload differentiation cheaply, immediately, and perfectly, but integration will frequently have an "unable to evaluate" result. I genuinely wonder whether integral-requiring workers are better or worse at it as a result of growing up with CAS tools. People on the periphery (a biologist, for example) are undoubtedly better off since they get answers they couldn't get before, but people on the interior (maybe a physicist) might be worse at some things they wish they could do better, relative to those who came up without those tools.
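The asymmetry is easy to see for yourself; here's a minimal sketch using sympy (my own illustration, not from the comment above), with `x**x` as a classic function that has no elementary antiderivative:

```python
import sympy as sp

x = sp.symbols('x')
f = x**x  # no elementary antiderivative exists

# Differentiation is mechanical rule application, so it always succeeds.
df = sp.diff(f, x)
print(df)  # x**x*(log(x) + 1)

# Integration is a search for an antiderivative; when the search fails,
# sympy hands back an unevaluated Integral object -- the CAS equivalent
# of "unable to evaluate".
F = sp.integrate(f, x)
print(isinstance(F, sp.Integral))  # True
```

Differentiation succeeds by running the chain/product rules to completion; integration has no such terminating procedure over elementary functions, which is exactly the offloading gap the comment describes.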


There are chip startups trying to address this exact issue. For example, Cornami: https://cornami.com.

There is also a lot of research into lowering the power/compute penalty of FHE. See ISSCC 2023: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1006752....


From my experience on this, the single biggest determining factor on one's opinion is whether you have kids (I have elementary-age kids). We have to work like mad to limit screens, but it is an uphill battle. Their schools use them, their music teachers use them, their after-school tutoring (Kumon) uses them, and so on.

I have a vivid memory of my son's 6th birthday. At his school, kids would sit in a circle and get to ask questions to the birthday kid. One of the questions he got was: "Who is your favorite Youtuber?" His heart sank when he said he wasn't allowed to watch it, and the other kids were kind of shocked.


I second that. 3b1b is fantastic for developing a more intuitive understanding of mathematics. In my experience, having that intuition makes all the difference.


Mathematics teaching should start with this and end with the actual computation. In high school I thought algebraic calculations were all there was to math.


Math should really be about understanding concepts and developing intuition, not about equations. The equations and formalisms are there to precisely codify our intuition and ensure (through proof) that our ideas are in fact correct, but they are less important than the intuition. It’s a shame that so much of our math education is focused on the former as opposed to the latter.


Did you ride the Great Allegheny Passage?


Chipset developers like Silicon Labs* are building very advanced but approachable security capabilities into their latest products (secure boot, secure debug, physical protection (DPA countermeasures, anti-tamper), key management, key storage, crypto engine, etc.).

The tools are there now to address this, and this should go a long way toward actually securing the application, the data, the IP, and overall simplify lifecycle management.

* Disclaimer: I am an employee.
* https://www.silabs.com/security


Unfortunately I've often found these capabilities end up being used against users as much as, if not vastly more than, they are used in their favour.

For example, secure boot and anti-tamper measures are often used to lock out users from being able to examine or modify equipment and software for their own benefit. Sure, these measures can be argued as ways to "protect" the user from themselves (preventing inadvertent/unsupported changes of hardware causing malfunction, or preventing the installation of malware, and so on), but to rob the users of their agency to decide what's best for themselves in these circumstances is fundamentally disrespectful.

Nonetheless, I hope your employer is in a position to be part of a movement to buck the trend here, but based on what I've seen in the industry over the years, I've learned to be very skeptical whenever I hear of such "security" capabilities being thrown around as universally beneficial for everyone.


The issue here isn't hardware capabilities, it's that vendors like to make their gadgets centrally connected for convenience and analytics and then on top often don't care about hygiene (e.g. no crypto at all).


Would it only allow the lamp to be "secure" in the sense that the owner could no longer take back control? If so, that's a "solution" worse than the problem; it's even unethical as hell, given that in the short/medium term it will accelerate the ecological nightmare.


I don't care how "secure" one can make an internet-connected lamp. I don't want or need a lamp to connect to the internet to change its operating conditions. The problem is that we, as a society, are being so suckered by cheap consumer devices that it's becoming difficult to even FIND NON-connected devices in some categories. Like the lamp in the article, I'm willing to bet that he looked for something with purely physical controls and couldn't find one at a comparable price point. I honestly don't get it. I can't fathom what some company could possibly be doing with my usage data from some internet-connected LAMP, or why they would go about designing all the infrastructure to make it work. It would be orders of magnitude easier to just put some buttons on the side of the unit. At this point, I guess someone out there thinks, "Oh, neat!" but this sort of situation is paving the way for it to be impossible to buy ANY consumer electronic device that doesn't phone home in the very near future.


This reminds me of a great film on crafting quartz crystals for use in radios: https://youtu.be/wHenisSTUQY


+1, it's an amazing video. I'm a daily user of crystal oscillators (right now on my desk there are 6 circuit boards of my own design, powered by 12 crystal oscillators), but I just couldn't imagine how they were made by hand, without laser cutting and trimming, back in the day. Someone directed me to this video, and everything is well explained.


I am from Texas and currently live in Austin, but I also had the opportunity to live in many places in the world (Bay Area, China, Singapore, and Norway) spread out over a decade.

As much as I have admired and enjoyed many of those places, I always knew I would come back to Austin. The quality of life you get here, all things considered, simply cannot be beat.

After I graduated from school in the Bay Area (circa 2005), I recommended to many friends in high-tech to move to Austin. Most said no, for many reasons. Fast forward to today, and many of those individuals have made the move or are thinking of doing so at this point. It is for the same reasons: cost of living, raising a family, space to build, etc. I welcome their arrival and hope they take the time to learn about the city and what makes it special.


I really do love visiting Austin but... Ouch.

https://www.kvue.com/article/weather/austin-tied-for-the-thi...

"As of Thursday, Austin has officially had its 19th day of triple-digit temperatures. This means we have tied the third-longest stretch of 100-plus degree days recorded at Camp Mabry. Not to mention, every single day this month has been above average. Our average temperature at this time of the year should be 97 degrees."


Yup, it can get hot in the summer.

On the flip side, winters are great! The best time of year is Mar/Apr and Oct/Nov. Absolutely gorgeous. There's a reason why SXSW and ACL are scheduled at those times...


I lived in Shenzhen from 2007-2011 and can confirm that large amounts of recyclables were shipped to China. Open-air facilities hired pickers to go through the material and pull out whatever they could before disposing of the remainder (by who knows what means).

This all happened within eyesight of my condo building.


Couldn't the remaining waste be efficiently used to generate energy in a waste incinerator? They are still high-energy oil products after all, and no longer mixed in with all kinds of wet biomass.


Exactly what I've been saying for years. Public perception of incinerators is pretty poor though.


There are a couple of European countries that burn all their waste. Sweden has been _importing_ waste to burn for a while. Switzerland is now retrieving various kinds of metals from the ashes. Ireland has installed an incinerator at a landmark position at the focal point of Dublin's shoreline.


They produce a lot of toxic gases, as the plastics often contain a variety of elements which, when burned, form nasty compounds.


Filtering that out is a solved engineering problem and you still remain energy positive.


Clean coal, clean diesel, now clean incinerated plastic: I don't doubt that it's manageable or maybe even fully solved, I just don't have much faith in it being employed. The track record of such things is not great.


The local incinerator here in Switzerland isn't merely burning the waste. IIRC it gets vapourised using a high energy plasma or something like that, broken down to constituent atoms. At any rate it belches smoke into the atmosphere in the middle of a city, has done for a while and nobody seems to suffer any ill effects, so I believe them that the "burning" is highly effective.


That sounds like plasma gasification (https://en.wikipedia.org/wiki/Plasma_gasification). Out of curiosity, which city are you in?

> it belches smoke into the atmosphere in the middle of a city, has done for a while and nobody seems to suffer any ill effects

How long is a while, and how does that compare to the time it would take to detect meaningful increases in cancer or other illnesses in the vicinity of the incinerator?


http://fernwaerme-zuerich.ch/index_en.htm

Was built in 1995, or so it claims, with other kinds of incineration being done for >100 years.

So 25 years or so. I think that's enough time for a clear effect to show up.


Japan does this.


They have a generally good reputation here in Sweden.


Thank you, I purchased the book!


There’s an O'Reilly book too.

