We can save what matters about writing (tedunderwood.com)
39 points by mjn 10 months ago | 42 comments



Random thought that just occurred to me: something I always wanted as a student but which is likely impractical because of the time it takes from the tutor, is to have an exam in the form of a conversation, where you can elaborate and explain and ask follow-up questions when you’re not quite sure, etc. This actually seems like something an LLM chat would be perfect for? Let the student talk to it directly and have it ask about whatever the course is about, and then have the teacher evaluate their ability to reason and explain various concepts.


In physics in college we were grilled in verbal discussions with a professor and an assistant for ~30 minutes, or until the professor had found the limits of our knowledge; one of mine went on for 50 minutes.

The entire grade of the diploma was composed of all the verbal tests and the grade of our thesis (pre-B.Sc.) - so there had to be a point where you knew virtually everything well enough to explain it at least one way, and to derive some relations or sketch a derivation if prompted to.

The flaw here was the personality of professors - some were easy to piss off, others more patient.


Isn't this similar to doctoral defences?


Part of our tests for projects back in college (in the Netherlands) was a verbal part, part presentation, part Q&A; it was mainly to test individuals in a group project to check whether they actually contributed to the project. That seems alright to me.


It's called an oral exam. They are actually brutal.


Quite common in some places, e.g. France


Sounds like a job interview


My feeling is that saving writing from AI is like saving thinking from the pen.

If we were to live in a society where building a cohesive thought has to be done completely in one's head and communicating it needs to be done via talking, the pen might seem like a crutch that will make people lose those skills.

---

By the way, I asked GPT-3.5 what it would change in my text above, and it says:

    Change "seem" to "appear" The word "seem"
    is more informal and less precise than "appear."
    By using the word "appear" instead, the sentence
    sounds more formal and adds more clarity to the
    writer's intent.
I'm not sure I agree. What does everybody think?


I think replacing “seem like” with “appear to be” would make the sentence worse. “Seem like” gives a more abstract feel to the comparison, which is well-suited to the sentence given that “the pen” does not refer to an individual pen and “a crutch” does not refer to literal crutches. “Appear to be” is also just more wordy.

But “has”, “needs”, and “will” should probably be “had”, “needed”, and “would”, and then I would replace “were to live” with “lived”.


Interesting that it gives the (bad) subjective advice, but misses the clear-cut grammatical error.


> My feeling is that saving writing from AI is like saving thinking from the pen.

> If we were to live in a society where building a cohesive thought has to be done completely in one's head and communicating it needs to be done via talking, the pen might seem like a crutch that will make people lose those skills.

In Plato's Phaedrus, Socrates made just about that exact point: that writing was not really a means to remember, but merely a reminder of what must be remembered, and that it cannot convey wisdom because it cannot be engaged in dialogue.

Socrates might've been a bit disappointed to hear of recent studies which suggest that the act of writing something down does help you remember.

I suspect the same is true of computers in general -- the technology of today that many thinkers like to grouse about for fear it is ruining future generations' minds. And to be sure there are Skinner boxes that run on computers like social media and Candy Crush that serve little purpose other than to sap our attention for advertising dollars. But... the act of programming a computer can be as mind-strengthening as writing, perhaps even more so. To program, say, a flight simulator, you have to know enough aerodynamics, deep down, to express it in a concrete, specific, and succinct way to a computer -- so if you want to strengthen your understanding of aerodynamics, writing a program to simulate it might be a good exercise. (I think this form of education was what Alan Kay was trying to get at with Smalltalk and the Dynabook project.)
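As a toy version of that exercise: to compute even a single number in a flight simulator you have to commit to a concrete model. Here is a minimal sketch in Python using the standard lift equation with the thin-airfoil approximation C_L ≈ 2πα; the function name and parameters are mine, and the approximation only holds for small angles of attack.

```python
import math

def lift_force(airspeed, wing_area, angle_of_attack, air_density=1.225):
    """Lift L = 0.5 * rho * v^2 * S * C_L.

    C_L is approximated by thin-airfoil theory (C_L ~ 2*pi*alpha),
    valid only for small angles of attack (alpha in radians).
    air_density defaults to sea level, in kg/m^3; airspeed in m/s,
    wing_area in m^2, result in newtons.
    """
    c_l = 2 * math.pi * angle_of_attack
    return 0.5 * air_density * airspeed ** 2 * wing_area * c_l
```

Writing even this much forces you to decide what C_L depends on, what units everything is in, and where the approximation breaks down - which is exactly the kind of understanding being described.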

Hmm... might programming and prompting an AI model both count as "forms of writing you can engage in dialogue with"? I think Socrates might sniff at them as weak forms of dialogue compared to sitting around talking with philosophers, but they're more active and dynamic than reading fixed, printed text and therefore may well present new avenues of education.


    programming a computer can be as
    mind-strengthening as writing
When computers become more intelligent than humans, a strong mind (able to think stuff through) might not be of importance anymore. Just like being able to run fast is not important anymore in the times of cars and boats and planes.


> Socrates might've been a bit disappointed to hear of recent studies which suggest that the act of writing something down does help you remember.

Socratic reasoning is based not on studies to determine what is true, but on logical argumentation, so I suppose he would find the appeal to studies unpersuasive.


Using AI to write is more like peeking at your neighbor's output during a school exam.

Or rather, since the AI also took from others, it's like peeking at the output of a neighbor who in turn copied it from his neighbor.


One difference is that this neighbor will always be there for you.

Another is that you cannot discuss stuff with your neighbor during an exam. But you can discuss anything as much as you like with AI.


> One difference is that this neighbor will always be there for you.

Someone you're copying from during an exam is there for the whole duration of the exam. Neither are there "for you" though, they're just there.

> But you can discuss anything as much as you like with AI.

No, because a statistical model isn't thought. You can get something out of it, just like you can get something out of watching a movie that was made without the knowledge of your existence, but that something is not conversation.


"You need to learn how to do arithmetic. You won't always have a calculator in your pocket!"

I guess the next generation will always have their personal AI with them.


I definitely think "seem" fits better, imo. Oddly enough, earlier today I weighed the difference between the two before sending a text message to a coworker.

"Appear" would fit better if the subject were visual.


This may be more a thing for learners of English as a foreign language, but in the institution where I learned the language at the C level, we were encouraged to buy, read, and apply foreign language style guides, e.g. the one from the Economist [1].

The text you copied reads a lot like something you would find in such a style guide.

[1] http://cdn.static-economist.com/sites/default/files/pdfs/sty...


'seem like' would swap well with 'appear to be', but frankly I'm not following your similes.


The similarity between the pen and AI is that both are tools that help mankind build better works of thought.

Technology progresses and mankind becomes more effective all the time.

At every stage, some people argue that the loss of manual ability outweighs the gained technological ability.

I remember people arguing against computer based navigation because people would lose their ability to navigate by map.


People largely did lose the ability to navigate by map. It hasn't mattered mostly because people have their computers with them often enough to not get lost, but I can tell you from having been in the Army it is a bitch teaching people to navigate without any electronics.

A pen is a means to write down and not forget the thoughts you're having, enabling long-form information storage and transmission. It doesn't formulate the thoughts for you. You might be thinking more of a ghostwriter. Rich kids have always been able to pay nerds to do their homework for them. Now that technology is available to the masses, we no longer need the nerds.

ChatGPT gave you a useless suggestion to justify its own salary, in my opinion. "Seem" to "appear" is a pointless, cosmetic edit that doesn't change the meaning of what you wrote in any way.


seem is better.

appear has more of a visual meaning, whereas seem is more abstracted.


I have yet to see an LLM-generated piece of writing or art that I like as art or that evokes anything.

Art is about communicating human emotion. I feel like I can tell when there’s nothing really on the other end. I don’t think AI is a threat to real capital-A Art. I think it is a threat to any form of content milling since it definitely churns out filler much faster and cheaper and better than humans. It’s also probably a threat to pseudointellectual sophistry since it can do that very well.

I wonder if the appearance of genuinely compelling art coming from an autonomous AI might be the most compelling sign of sentience?

If so then William Gibson nailed it again. That’s what the final superintelligent AI at the end of Mona Lisa Overdrive (final book in that trilogy) is doing.


I disagree that it's not capable of producing worthwhile creative writing. I've had great fun getting it to generate good art, usually by giving it writing tips. Things like: "Show, don't tell. Give characters (but only some, not all) internal motivations they try to hide from others and, indirectly, the reader. Create interesting worldbuilding, hinting at complexities and structures without fully explaining them."

To tell the truth, there is a creativity and an art required to prompt it to produce interesting results; be thankful for that fact. There is also a fun inversion of values for me when you start using it this way: I find I value interesting story aspects the AI came up with on its own more than ones I told it to use, I guess because it's not always that great at it, so it's a pleasant surprise. One time it decided, out of nowhere, to set an unrelated story I asked for in a religious dictatorship with strict and specific rules I couldn't trace to any religion I knew of, which was particularly delightful, where it might be merely a "meh" framing device if I knew it were written by a human.

Another prompt I found that gives great results was asking gpt4 to use a "staccato writing style". I've had some really creative writing styles come out of things like this. Sometimes I go into a little more detail, like "use only short sentences punctuated with particularly resonant words that make use of hard consonants", but leaving it to its own devices with more "flowery" descriptions of writing styles can often work quite well.

My favorite was one where I asked gpt4, some months ago, to write a kafkaesque story set in a world where animals held all real power and were generally dismissive of humans' concerns.

I won't post the whole thing, that's asinine, but it did a great job with worldbuilding, writing style, characters, keeping it artsy and mysterious, etc.


If you're working really hard to prompt the AI into generating a piece of compelling art, I'd argue that you're the artist and are using it as a tool. The key is that you're putting in effort and it's driven by an artist's intent. The artist is not being replaced.

This is more like photography. A camera is just a dumb box that records images, but in the hands of an artist it can be used as a tool to make art.

People were really afraid cameras would replace artists too, but they didn't. They replaced portraiture, the business of painting photorealistic likenesses of people and things, but they didn't replace painting as art, and in fact opened new avenues to it.


I think "pseudointellectual sophistry" is just a subcategory of "filler"/"content milling", because there's nothing real there.


What we're seeing now is a bit of an arms race between cheaters and teachers. Why don't teachers "cheat" a bit themselves?

Namely, why don't universities extend the statute of limitations on academic misconduct to several years, or even decades? Collect all student assignments in a database and routinely re-run them through the most up-to-date cheater-catching software available. Anyone who gets flagged, even years after graduation, gets their degree revoked unless they make an adequate defence to the university.

If this is done, students don't just have to fool teachers and their tools at the time they take the course, but all tools developed for years afterwards. Having cheated would become like having a sword of Damocles hanging over your head. If you could pour years of tuition and "hard work" into a degree and establish yourself in a profession only to have it all ripped away years later because you cheated, how many would still cheat?

With methods like this, the take-home essay could remain a part of education.
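A minimal sketch of the re-scanning scheme above, under stated assumptions: the `Submission` record, the detector interface (a callable returning a score in [0, 1]), and the threshold are all hypothetical, and a flag would only trigger human review, not automatic revocation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Submission:
    student_id: str
    year: int
    text: str

def rescan(archive: Iterable[Submission],
           detector: Callable[[str], float],
           threshold: float = 0.9) -> List[Submission]:
    """Re-run every archived assignment through the newest detector.

    Returns the submissions whose score meets the threshold, queued
    for human review; a flag alone should never revoke a degree.
    """
    return [s for s in archive if detector(s.text) >= threshold]
```

Each new detector release is just a new `detector` callable run over the same archive; the deterrent comes from the archive outliving the course.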

If this seems too objectionable, there are still labour-intensive ways to work around the problem, e.g. oral exams. You can assign take-home essays and then have the prof or TAs interview students about the essay. If they can't answer some basic questions about their own essay and some of the sources it uses, that's a fail.


In a sense, that is what happened in Germany in the last 20 years or so, when people looked into the doctoral titles of politicians.

Turns out it did not change anything. Even in cases where the universities revoked the titles, the politicians were hardly affected at all. Some became our equivalent of governors weeks after their cheating was unveiled.


I wouldn't say it changed nothing; take the Guttenberg incident, for example. But more often it doesn't lead to actual repercussions, I agree with you on that.


> Collect all student assignments in a database and routinely re-run them through the most up-to-date cheater-catching software available. Anyone who gets flagged, even years after graduation, gets their degree revoked if they don't make an adequate defence to the university.

So if some new overhyped plagiarism detection system spuriously decides that I cheated years ago and I've long since thrown away any notes that might show otherwise, I lose my degree? I don't see how that's a good idea.


Once you have a number of years of actual job experience, do people still care whether you have a degree? (And do people really even verify somehow whether you have that degree in the first place?)


It may not matter to some, but it will to others. Can cheaters rely on finding employers who don't care? The deterrent still works.


Unfortunately, cheat-detecting systems sometimes produce false positives. I would not like my odds of being falsely flagged for cheating to grow with every new system my work is run against, years after I no longer have a relationship with the professor and he doesn't remember me.


it’s just showing your work with more fancy descriptions.

if you have a beautiful term paper that emerged from the ether, it’s likely synthetic.

if you have notes and outlines and all the trappings of having written a term paper, then at the very least you understand all the components and the process, even if you managed to use a language model for every step.

that’s the point, the box is open. but it can be used rather than abused. things change. how we learn is changing.


Edit: This turned into a rant.

"Teach the process"

Learn to learn

etc

The approach the author takes is boring and therefore will not succeed. Maybe the future will prove me wrong on this - I'd be fine with that.

My take is an axiom that is omnipresent in the tech bubble: focus on what matters. Let's ask ourselves, every time we put more than one thought into something: "What is this really about?" "Why do I actually have to do this?" Even without AI, our academic systems are being gamified every day. AI just accelerates (and partly even democratizes) this movement. People chase a facade, and sadly the system has followed. We need to start asking general questions more again.

With AI and art this is even more visible. It is often overlooked that AI cannot create something new, and you do not have the right to make a living by rearranging and combining already existing ideas. But exactly this is the metric we should focus on: the new. If you don't want to create something new, then fine, just copy and paste along. But making something new is hard, and you'll need to learn the basics yourself (teach the process, learn to learn, etc.). To impose this approach, though, is yet another misguided turn. It has to come from the one thing that is important: the new, the actual drive, the 'doing it because I want to'.


Every time people come up with a new curriculum or a new way to teach, it seems to backfire. Here's a thought: don't do anything. Let education be the farce that it is, save for the few people who really believe in it, who have been the only beneficiaries since the beginning anyway.


The shift from print to digital mediums has democratized the act of writing, allowing more voices to be heard and expanding access to information.


We're only about a year into the post-ChatGPT world. We've seen nothing yet.

Asking humans to do things manually that a computer can do effortlessly and flawlessly is going to be relatively low value. I remember going to school and having to learn to work out on paper how to multiply and divide numbers. The homework was super tedious and I was bored with it. And right in front of me was the solution: a calculator. This was the early nineteen eighties. I'm sure I could work out from first principles how that stuff works, but I haven't multiplied or divided large numbers manually since I was allowed to actually use a calculator in high school, about 35 years ago. Why would I? It's tedious, error-prone, slow, and I have multiple tools within reach that can do it for me.

The adoption of AI is the same thing. The generation after us will not know a world where AI wasn't a thing and will use it without even thinking about it: effortlessly, skillfully, and effectively. And they'll be better off, mostly. Us old-timers rambling about the lost art of writing (and a lot of other skills) are basically like every generation confronted with the world changing around them: no longer relevant. Besides, if you've ever had the pleasure of grading a bunch of student essays, you'd realize that most students are not great writers to begin with. Not having to read a lot of badly written garbage is actually an improvement. And being able to ask an AI to extract the essentials of any human-produced drivel is probably a great productivity enhancer as well.

In the same way, I don't need some second rate lawyer messing up legal work and overcharging me for the privilege when I can have an AI do it right. AIs are passing bar exams now so that doesn't seem like an unreasonable expectation. Once it stops hallucinating, we can get rid of the remaining lawyers. I run a startup and I deal with lots of weird and hairy legal issues related to HR, taxes & profit, NDAs, sales contracts, etc. Mostly that still requires getting legal advice from skilled individuals. Our startup is in Germany and I'm not a native speaker. This shit is super complicated and exactly the kind of stuff where you can get a lot of value out of AIs. We still cross check these things with actual lawyers. But it saves a lot of time, effort, and billable hours if you can get a grip on these things before you talk to them.

Artisanally figuring it all out by yourself, unassisted by modern tools, might be interesting to some as an artistic expression, or for aesthetic/sentimental reasons, or whatever. But when the end result is mediocre and subpar, it won't have a lot of value to others. It's like a toddler drawing something: it's cute and generally greatly appreciated by parents/grandparents/etc. And just because we now have Midjourney doesn't mean the toddler's drawing has less value for them. But let's face it, most toddlers don't turn into great artists.


The irony may be that removing the barrier to legal advice in that way will at the same time make the implementation of your startup a triviality.


What do you mean by 'implementation of your startup'?


I feel any method that is powerful enough to dependably act as an attorney is powerful enough to found its own company, write software, acquire clients, subcontract, etc.

With the expression I meant the realization of a business idea into a self-sustaining business.



