Hacker News

> Such as? We need an answer now because students are being assessed now.

My current best guess is to hand the student something that was written by an LLM and challenge them to find and correct its mistakes.

That's going to be what they do in their careers, unless the LLMs get so good they don't need to, in which case https://xkcd.com/810/ applies.

> Personally I think all this is unpredictable and destabilizing. If the AI advocates are right, which I don't think they are, they're going to eradicate most of the white collar jobs and academic specialties for which those people are being trained and evaluated.

Yup.

I hope the e/acc types are wrong; we're not ready.


> My current best guess is to hand the student something that was written by an LLM and challenge them to find and correct its mistakes.

Finding errors in a text is a useful exercise, but clearly a huge step down in terms of cognitive challenge from producing a high quality text from scratch. This isn't so much an alternative as it is just giving up on giving students intellectually challenging work.

> That's going to be what they do in their careers

I don't think this objection is relevant. Calculators made pen-and-paper arithmetic on large numbers obsolete, but it turns out that the skills you build as a child doing pen-and-paper arithmetic are useful once you move on to more complex mathematics (that is, you learn the skill of executing a procedure on abstract symbols). Pen-and-paper arithmetic may be obsolete as a tool, but learning it is still useful. It's not easy to identify which "useless" skills are still useful to learn as cognitive training, but I feel pretty confident that writing is one of them.


> Finding errors in a text is a useful exercise, but clearly a huge step down in terms of cognitive challenge from producing a high quality text from scratch.

I disagree.

I've been writing a novel for… far too long now. Trouble is, whenever I read it back, I don't like what I've done.

I could totally just ask an LLM to write one for me, but the hard part is figuring out which parts of those 109,000 words of mine sucked, much more so than writing them.

(I can also ask an LLM to copyedit for me, but that only goes so far before it gets confused and starts telling me about something wildly different.)

> It's not easy to identify which "useless" skills are still useful to learn as cognitive training

Indeed. And you may also be correct that writing is one such skill, even if only to get the most out of an LLM.

What I'm describing here is very much a best guess based on minimal evidence and the current situation; I would readily drop it for another idea if I saw even slight evidence for a better solution.


> e/acc types

Please expand?


Effective accelerationism: the promotion of rapid AI development and rollout, appealing to all the deaths and suffering that could be prevented if we reach the Singularity a year early.

They're extremely optimistic about the benefits of new tech and downplay all the risks. My experience of self-identifying e/acc people is that they generally assume AI alignment will happen by default or be solved in the marketplace… and where I specifically hope they're wrong is that many of them seem to think this is all imminent, as in 3-5 years.

If they're right about everything else then we're all going to have a great time regardless of when it comes, but I don't see human nature being compatible with even just an LLM that can do a genuinely novel PhD's worth of research, rather than "merely" explain it or assist with it (impressive though even those much easier targets are).


TYVM. Hopefully the inability to see ways this could go wrong, or to really look at the problem, is sufficiently correlated with lacking the tools required for progress.



