ChatGPT in Academic Writing (monsterwriter.app)
62 points by WolfOliver on March 5, 2023 | 92 comments



My wife is a university professor in the States. She says it's incredibly obvious who has started using GPT for the work they turn in. There is a lot of wrestling going on among the staff in her department over what they should do; some people want to outright ban anything that "seems" like it was written by an LLM. As my wife discussed with me, it's hard to know how much was written by GPT and how much was edited by the student: maybe they asked the LLM 50 questions and then copy/pasted the answers together themselves, maybe they used it for the outline and filled in the details themselves. The lines can get pretty blurry.

One thing it seems most of the profs find annoying is... they're aware that GPT can invent convincing-sounding sources, so they're doing a lot more work verifying sources.

Commented this ^^ the other day on the post about UK students being able to use LLMs: https://news.ycombinator.com/item?id=35028224


I think your wife's colleagues are missing the forest for the trees here. This isn't going away. You can't stop it. Her university can't win an AI arms race, nor should it even try. I've already integrated ChatGPT into my workflow, and not only is my boss thrilled about it, he's actively embracing it and encouraging the rest of the people at work to do the same.

We're discussing approaching OpenAI about enterprise pricing so everyone can have the Plus version.

I'm able to take the bullshit grunt work of my job and essentially outsource it to ChatGPT. Then I go in and fix everything it fucks up - not just code, but poor wording, incoherent / incongruent sentences, etc.

University professors, high school teachers, middle school teachers - everyone - are all in for a rude awakening. You're going to have to get to know your students. You're going to have to scale down class sizes, because you have to now invest time in learning someone's writing style to know what's ChatGPT and what's their own voice. You're going to have to actually test for understanding, not just rote memorization. Our species is moving past that, thank God.

And more importantly, we're going to see college scaled back like it should have been already. Most people are not only not college material, they don't need to be college material. The vast, vast majority of jobs in the world can easily be done without a four-year degree and with a few weeks to a few months of on-the-job training.

We're long overdue to pass the buck back to companies to pay for training their workers, not dumping a $25,000 to $250,000 responsibility on would-be workers in the hopes they can stand out. ChatGPT will hopefully be a huge catalyst for this, and I think paired with crippling student loans, we will start to see a massive paradigm shift soon.


I agree with you that it's here to stay, and I'm sure my wife does too... but it's not so simple.

I can mostly figure out how to improve my work with modern ML, and I'm fairly tech literate... I think it's just a painful time for them. It's easy to say they're in for a rude awakening because it's obvious to us that they are. However, she's a history professor who can hardly use a computer. Between dealing with the bureaucracy of an inherently protectionist work environment, reading a zillion books, research, teaching, conferences, publishing, etc., etc., plus never really focusing on becoming technology literate, this is a lot to process.

My wife is still young relative to her colleagues, so I think she can deal with the shift, but it doesn't negate the fact that she's a (junior) member of a team, and this conversation is happening in all departments, from architecture to philosophy to math. It's going to be interesting to see what happens.


Although my first post may not sound like it, I am in fact quite sympathetic to your wife and her situation (although I think she'll be fine if she adopts what I'm about to post).

> However, she's a history professor who can hardly use a computer. Between dealing with the bureaucracy of an inherently protectionist work environment, reading a zillion books, research, teaching, conferences, publishing, etc., etc., plus never really focusing on becoming technology literate, this is a lot to process.

This is no longer an option for our species.

We've managed to dumb down computers to the point that people can't even figure out what operating system they're using (Windows 10/11 or macOS).

That's a problem. I don't care what anyone says. They're all objectively wrong. You *need* to have some idea of how the technology that shapes your life works. Just like you don't need to be able to tear down a car, you don't need to be able to explain how ChatGPT works, but just like you understand that using the accelerator makes your car go and the brake makes your car stop, you need some base idea of what's happening with your computer and the software you use.

We see this in just about every aspect of our lives. I wouldn't be able to file taxes for a multi-million dollar business with 300 employees, but I can file taxes for myself with a little bit of effort and reading (in fact, even trained professionals struggle with filing taxes, so maybe this wasn't the best example...).

I think you should have your wife make an account on OpenAI, and subscribe to ChatGPT, and try it out. Sometimes I literally just talk to it when I want to learn things.

I've never had to use Microsoft Outlook, ever, until I started working at my current job. I didn't know what an OST file was, or why they get fucked up so often, so instead of Googling and reading a bunch of shit, I just asked ChatGPT, "What is an Outlook OST file?" and it told me. And I went down an Outlook rabbit hole from there.

And the 4.0 and 5.0 versions are just going to be that much more spectacular. I think it's better for her to embrace this right now and get on the other side of this train than be caught on the tracks when it arrives at her station.


> I just asked ChatGPT, "What is an Outlook OST file?" and it told me.

Wow, I'm happy for you.

You realize though that the kinds of questions academic research tries to answer are a bit more nuanced than that, right? And not only that -- almost by definition -- when doing this kind of research, you don't just want to copy-pasta whatever turns up on the top of any automated search you do (whether it be through a search engine, or a chatbot).

Right?


I think it's you that may not be realizing some things. There are a lot of bullshit degrees and bullshit fields that ChatGPT is going to destroy. And it cannot happen soon enough.

For the actual rigorous fields that require some work and effort, ChatGPT wasn't designed to replace those researchers. It wasn't designed to make sense of the data that that kind of academic research produces. It's designed to take the bullshit work out of writing up the sense you've already made of that research.

It's an assistant. That's all it is... right now.

It will eventually, and probably sooner than later, be much, much more.


> University professors, high school teachers, middle school teachers - everyone - are all in for a rude awakening. You're going to have to get to know your students. You're going to have to scale down class sizes, because you have to now invest time in learning someone's writing style to know what's ChatGPT and what's their own voice.

I’m pretty sure this is not a rude awakening. Everyone would like to work more closely with fewer students. The ones in for the rude awakening are the ones who will be paying for more instructors.


> The ones in for the rude awakening are the ones who will be paying for more instructors.

That's not going to happen. Too many free alternatives that can get the job done, and people are waking up to that.

*Administrators* might have to go find a way to be productive for once... but people are fed up with the insanely high costs of higher education. Especially given what it's producing.


> I've already integrated ChatGPT into my workflow

And your line of work is?


Some of the cheating using AI methods right now is very obvious because the writing is so poor. (For example, it makes things up, falls flat on argument, or uses a grade-school essay style.)

But what happens if generated poor content like this becomes a high percentage of what people are reading?

Will people start unconsciously mimicking the styles in their own writing? (This might be at least a temporary effect, as a technical writer mentioned they'd seen in their own work during a period they were reading Hunter S. Thompson heavily. Getting a bit punchy for tech doc.)

And will people's cognitive and expression abilities outright decline? (Perhaps because they're exposed to lots of generated examples of argument and other communication, or because they leaned on an AI-method tool, rather than go through the thinking and perhaps learning exercise themselves?)

Maybe these effects will begin, and even be noticed, but rather than try to fix them, society will just lower its standards, and then others will try to profit from the problem existing and continuing to exist? (Look at journalism and political discourse. Or at software development.)


I truly think this will happen.

Most people I know, smart people included, really struggle with writing. People struggle to clearly communicate ideas, express themselves, form sentences and more.

I'd say the majority of people I work with are actually poor communicators, especially via text (email, SMS, etc). Writing is a skill and no one is "perfect" at it, including me. It's only going to get worse without practice. I know this because I'm learning another language, and if I take even a few weeks off, I lose it pretty quickly despite being at an intermediate level.

There was a period in my life where I picked up a pen to write something and realized that it had been so long since I used one that I was actually intimidated and struggled. I was too used to keyboards. I don't see outsourcing writing itself to ChatGPT being much different.

I cringe a little when I read all these messages like, "Oh, I just use ChatGPT to respond to everything now and write all my code etc". Well, it's good that it's saving you time, but you're also not exercising your ability to think anymore, and that will be detrimental. Many of those people probably didn't have a yardstick to measure their own writing ability by in the first place and now just assume they're better off outsourcing to ChatGPT. It's a questionable position to take.

There might be a time in 10 years where people are told they should write because it's good for their brain, like how we have to tell people who don't do physical labor they need to exercise...funny stuff.

Lastly, there must truly be something to be said for the fact that ChatGPT, at least currently (maybe it will train against itself soon), is trained off human inputs. If that trend continues and everything is ChatGPT, well, where does that leave us? With a stagnant, flat way of communicating and stale ideas?

I feel like everything is becoming a little dystopian lately, ha!


I'd be quite worried if I could automate my job with ChatGPT. ChatGPT will never be worse than it is now, after all, and never be less user friendly either. If I were a glorified UI for ChatGPT, I guess my boss would be looking into cutting out the middleman.


I'm in grad school right now. I use ChatGPT to convert first drafts into polished final drafts by asking questions like the following:

- is this text convincing? How can it be improved?

- rewrite the following passage for clarity. Target an audience of X level.

It's like a more efficient version of Grammarly; I don't have any qualms about this use. It saves hours debating phrasing, and acts as a TA for getting initial feedback on whether you're missing the mark.


If you are in graduate school, you should learn how to write. You will not always have the luxury of an internet connection to help you clarify your point to your audience.


This argument is basically the same as the teachers' argument that 'you'll not always have a calculator in your pocket'. And I suspect it will age equally badly.


That argument hasn't really aged badly, though. People do still study math in school... you still need to learn arithmetic. Not because you'll necessarily need to be able to do arithmetic without a calculator, but for the cognitive skills it trains you in.


I was also terrible at arithmetic, my major was physics :shrug: I was also terrible at cursive, and prone to getting lost prior to GPS. I also never learned to carve a spear without modern power tools.

I'm either a fool who's able to get by with modern technology - or we're freeing up mental capacity to work on new ideas. ChatGPT is yet another tool, one which is demonstrating productivity improvements comparable to the launch of the internet IMO.


This has not aged badly at all; you now have people who cannot quickly add numbers together, or even figure out fractions. Example:

My mom (Rhonda, 64) is the shop steward at a deli counter and had this conversation with her fellow employee Janice (25y):

Janice: "Hey Rhonda, they want 2/3 of a pound of cheese. How much is that on the scale?"

Rhonda (mom): "Janice, that's .66 on the scale."

Janice: "Yea but why is it .66 on the scale if they want two thirds?"

Rhonda: "Janice, you don't have to worry about that, just know that two thirds is .66."

Janice: "Yeah, but HOW do you know that?"

Rhonda: "..."

I'm sure someone will say "Your mom is wrong because she should teach Janice fractions! What a huge jerk!" Note that this person has gone through high school, and children do this math regularly. What I am saying is, being numerate is useful at all levels of life, not just as an academic exercise.


People should learn how to use the computer they carry everywhere.


Sure, let me bust my phone calculator out in a deli with customers while wearing food service gloves.

The model breaks down very easily, all to avoid learning something ONCE.

Here's the difference: you know fractions everywhere, and you don't have to think about it for more than two seconds without your phone, because you are math literate. Or: every time someone asks you something, you take five seconds to pull out your phone, type in the numbers (hopefully correctly), and get a decimal answer that you can hopefully interpret.

Which do you think is a better employee overall?

I have a particular axe to grind on this one, as the response "well I'll just look it up!" is so glib it throws the entire house out with the bathwater and the baby inside :).

For day to day math, needing a calculator to do the basics is like walking instead of flying. Mastery of the basics really pays off, especially for young students.


I'd argue that using language is more innate than calculating things, and that practicing it may well contribute to being able to form more coherent thoughts/ideas. After all, language seems to be one of the biggest things setting humans apart from animals.


It's funny you're trying to use this argument with me :) The value of mental math was instilled in me as a young man. When I am with family and buying things and they read out fractions or add up costs, they've developed the habit of deferring to me for mental arithmetic, because taking out their phones to add things is tedious. Mental arithmetic is easier and requires no time punching in numbers.

I have never used a calculator app on a phone; at least, I don't remember the last time I did. The only time I need to add up a large set of numbers, I get in front of a computer and use a Python REPL or a spreadsheet, but even then I "pre-stage" the calculation by doing approximations in my head, so I'm sort of ready.

Not letting your tools think for you is what distinguishes a professional from someone who can merely push buttons. Don't do yourself a disservice by never really learning to the limit of your abilities.


"because taking out their phones to add things is tedious. Mental arithmetic is easier and requires no time punching in numbers."

This is a hilariously bad take. I'm sorry, how long does it take to "punch in numbers"? Is it possible to fail touch-typing class on a phone?

The value in understanding mathematics is not the ability to be able to recite the quadratic formula or to be able to perform long division in your head, it's knowing when to use the formulas and understanding their applications in your daily life and work.


Yes, it is quicker... I don't know if it's because you don't have such an experience to compare with, but it is quicker and easier. If your hands are full of things, then it does save a few seconds.

>The value in understanding mathematics is not the ability to be able to recite the quadratic formula or to be able to perform long division in your head, it's knowing when to use the formulas and understanding their applications in your daily life and work.

You can do both, really. You should be able to recite the mathematics you use in your daily work without having to look it up. I've never met a good scientist who wasn't able to.

EDIT: okay, maybe one exception is something like tedious calculations from QFT, but you should be able to gesture at general parts of it, and you should be able to recite certain simple equations or particular approximations. If you have to refamiliarize yourself with your work every time you look at it, I do not know how you can advance in your research.


I thought the same thing, then I also realized that it might be nice if people actually enjoyed writing, enjoyed the process, enjoyed getting better at it and, who knows, even... used their brains a little bit more.

Even though I find ChatGPT impressive, it still doesn't write what I want it to write in the way I want it written.

In the same way, DALL-E and Midjourney don't really draw what I want them to draw. They draw something, and I can just accept it's "good enough", but it's a different experience from creating something yourself.

ChatGPT writes something, but it's not me. While it might save me time, it's not as fun.

I think learning to enjoy the process is important because, honestly, that is all there is to life.


That is what I consider part of the process. You learn to write because it is a requisite skill and part of your craft, but eventually, as a professional, it becomes part of what you use to distinguish yourself. It is a part of your professional persona, in the same way writing in creative senses (blogs, novels, etc) is.

It's one thing to automate dry responses (email bots have been doing it for decades now) but automating writing in your career work leaves little left of your career, and further, it dilutes what you add in value to everyone else you work with.


Wow, props on getting that username!


Sounds like they're just using it to touch up an existing draft. They've come up with the fundamental ideas and to me the writing is merely ornamentation and window dressing.

I used to be an ESL instructor in another life and as I've always said to my students, "it doesn't matter how many languages you know if you've got nothing to say."


aye - I learned that lesson the hard way many years ago. As with all things ChatGPT - it does better with an expert user. Prior to ChatGPT I learned how to write well both for my job, and then for grad school. ChatGPT is just a time saver.


LOL, anywhere on the planet where you would actually need to clarify a point to an audience, you'll have an Internet connection.

Do you honestly think you'll get into a philosophical debate so incredibly challenging that you'd need to fall back upon ChatGPT to argue on your behalf in the middle of say, the Australian bush??

And even if you did, RV Starlink is there to save the day.

ChatGPT doesn't "do work for you". ChatGPT is like having a subject matter expert by your side to guide you. And like every single person who has access to a subject matter expert, you're going to take away whatever you want, and leave whatever you don't. Lazy shitbags will be lazy shitbags with or without ChatGPT. But inquisitive people who want to soar with the eagles have an impossibly powerful tool at their disposal.


Your committee will be real impressed if you whip out a phone during your oral examination and ask to talk to ChatGPT first.


Then they will argue that the examination method is outdated


An oral or dissertation defense is a high-stakes version of things you tend to do a lot: giving talks, teaching, etc.


Until your phone dies. The point is some skills you either have, or you don't.


It sounds to me that they are learning how to write, ChatGPT is the tool that they are using to do so.


I think they know how to write if they're in graduate school.


> some people want to outright ban anything that "seems" like it was written by an LLM

... The false positives won't be pretty. Especially for ESL students whose grasp of the language might make them sound a little too artificial.

> it's hard to know how much was written by GPT and how much was edited by the student

The smartest students won't even be suspected of using an AI model. They'll use the AI to do directed work, proofread and re-write as needed. Not really different than going through three revisions with different mentors for their college entrance essay...


As a recent graduate, I read plenty of my peers' papers that were obviously written in one hour at 10pm, talking in circles with no substance. This is the level of output of these LLMs, and frankly I think all of them should be getting C's.


> The smartest students won't even be suspected of using an AI model. They'll use the AI to do directed work, proofread and re-write as needed. Not really different than going through three revisions with different mentors for their college entrance essay

If that’s what they’re using it for I propose that they’re meeting the learning objectives just as much as someone who did those things in a study group with their friends. Of course how acceptable that is depends on the context of the work.


I think one of the other things that could be valuable would be to ask students to write fewer, longer-form assignments. ChatGPT and friends aren't very good at staying on topic for all that long without being very repetitive. I think that's probably a side effect of the number of tokens available. I don't think that ChatGPT could reliably write a detailed 5k-word essay or more. It might be able to make some useful snippets to copy and paste, but I think the tone would be weirdly disconnected, and that might make it more obviously not written by a person.

I’d be interested if anyone has tried this and what their experiences were.


I found it interesting that there are starting to be best practices for professors and teachers on how to deal with ChatGPT in a classroom [1].

One I really liked a lot:

> And if you don't trust the AI to output correct content, teach your students critical thinking by using Assisted Teaching to write history essays and let your students find mistakes in it.

Another way (if you want to go the more traditional way), is the Flipped Classroom [2] method, in which students self-teach at home and do the "homework" in the classroom.

I agree with others here that there is no way to ban these technologies; there are just better and worse ways to deal with them.

[1]: https://assistedeverything.substack.com/p/the-age-of-assiste... [2]: https://en.wikipedia.org/wiki/Flipped_classroom


But there is also quillbot.com, which has been around for some time and is used by students to paraphrase text they copy & paste from other sources. IMO this is worse.


Students should be rewarded for checking sources, and (moderately?) penalized for not doing so. I would say that at a minimum, every fictitious source knocks off a full letter grade. And make sure students know this. They will adapt. And it might get them to the sources, at least superficially (yeay?) to check for existence/non-existence.


Interview the student.

That's literally all it takes... but oh no, it's too expensive, must figure out how to automate testing... while fighting against the automation of answering.


Avoiding doing assignments by having ChatGPT do them is obviously a bad idea. Cheating has always been possible and always will be; we try to prevent it, but cheaters are really creative and are mostly hurting themselves. Any effort spent on preventing students from ripping themselves off is effort not spent on helping the students who are engaging with the system in good faith and trying to learn.

It seems more troubling if people can cheat their way all the way through college. There are a number of skills that really benefit from kind of boring/non-creative work. In intro classes, part of it is just spending enough time to learn “reflexes” that will help later. I'm sure it is easy to find tools that will do this work for you.

But at some point, I mean, juniors and seniors (err, I guess this is US lingo, the last couple years of college) should be doing actually interesting work, to the point where if they can “cheat” their way through the material with new tools, then... they've found an interesting new way to solve real problems. And if they aren't doing anything interesting at all, then there are much, much deeper problems at the school.


This only works until people notice that many at the top (company/society) were the cheaters.


Honestly doing my own assignments seems to have left many of my less honest peers with better careers than mine (my smarter peers mostly did better too, though).


How do they get to the top? If they’ve cheated their way through school, they shouldn’t have the skills required to do good work.


You're presuming 1. a meritocracy, where only the most skilled people move up, and 2. that the skills needed for school are the same skills that allow you to move up. I don't think these are necessarily safe assumptions.


Why "should"? Isn't the important thing whether or not the student has the skills, not how much homework they did?

Because not all students pick up things at the same speed, and some are naturally gifted and don't need to go through the same amount of homework to get to the same skill level.


> learn “reflexes” that will help later

True enough. I inadvertently learned trig so thoroughly that I could "see" many steps ahead as one. I say inadvertently because, since every class was a math class full of incidental trig work, this winds up happening to everyone.


I don't know how many HNers were in Greek life, but every single house has a huge test and homework bank. It's one of the big draws of pledging.


I am very tempted to try writing my next paper (maybe a low-stakes arXiv publication) with ChatGPT.

Having done the research and gotten the results, I'd ask it to turn draft paragraphs into properly written text.


You've done all the work only to be caught out by automated AI recognition[1] later.

Low stakes should probably be fine but to anyone using it for something important at least do the old rewrite-to-not-be-plagiarism trick.

I have to imagine there's going to be a lot of people regretting using GPT shortcuts soon. It'll be the equivalent of copy pasting Wikipedia by the end of the year I bet.

The next big Google search ranking update should be fun too with all the AI content mills popping up right now. If you see your rival using AI for their blog posts, start writing real content in preparation :D

[1] https://openai.com/blog/new-ai-classifier-for-indicating-ai-...


> our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives)

That is... not impressive. I'm honestly surprised that OpenAI was willing to release the tool in this state; it's probably the least responsible thing they have done since calling themselves "OpenAI". Some kids are going to get expelled for no reason.
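
A quick back-of-the-envelope shows why (a sketch assuming, hypothetically, that 10% of submissions are AI-written):

    # Hypothetical class: 1000 essays, 10% of them AI-written.
    total = 1000
    ai = int(total * 0.10)          # 100 AI-written essays
    human = total - ai              # 900 human-written essays

    flagged_ai = ai * 0.26          # true positives:  26 flagged
    flagged_human = human * 0.09    # false positives: 81 flagged

    precision = flagged_ai / (flagged_ai + flagged_human)
    print(f"{precision:.0%} of flagged essays are actually AI-written")  # ~24%

At that base rate, roughly three out of four flagged essays would have been written by a human.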


If schools start using it, solely it, for plagiarism/cheat detection then I'm right there behind you in the protest.

I'd be ok with it being a first line thing that causes a more fine tuned manual check and maybe a manual pop quiz on what the person 'wrote', though. Manual being the key word after AI does its bit.


These watermarking/"detect if someone is AI" techniques are trash, and OpenAI even admits it on their post about this. No proposed or implemented technique outside of this is even close to robust, reliable, or accurate.

It is trivial to get around even the best of these techniques. This is a cat-and-mouse game with a fat, lazy cat and really motivated, Olympic-tier mice.


I expect this from Reddit but surely Hacker News isn't judging future tech based on current prototypes? Haha

None of this existed at all 5 years ago. Let it cook.


Yeah, within a few years there will probably be tools that can reliably detect if a (long) passage was generated by chatGPT or not. I doubt they will be able to tell if the text was generated by whatever will replace chatGPT by that time.


Well then we wait another half year for the 'whatever will replace chatgpt' detector.

It's always been a to and fro between spam and ham. Always will be.

Unless you're releasing updates to your paper each time a new AI model comes out, you're at risk of being caught out if anyone decides to do a "We found 80% of xyz to be AI generated" blog post.


just add "make some common spelling mistakes" at the end of your prompt :D


This is actually kind of disgusting, because they're simply playing both sides of the fence, and to no real point other than to make money.

This doesn't advance us as a species. ChatGPT does. The point of writing is to communicate ideas, thoughts, and feelings. It shouldn't matter if an AI took your ideas, thoughts, feelings, data, experiments, and then amalgamated it into a coherent work from which others can draw meaning.


AI detection is mostly useful for detecting signals of fake news.


I guess it will be a cycle of finding ways around those classifiers and fixing the classifiers.


Forget the papers, if this thing writes grant applications...


I am playing with it right now. Not for results and discussions, but it could be valuable for the introduction fluff, which is very tedious to write, and the methods and techniques, which are very repetitive and for which I sometimes struggle to avoid self-plagiarism and copying and pasting from previous articles.


Ugh, I tend to think of self plagiarism in methods and techniques sections as a feature, not a bug... No change means no change, after all; so long as the other paper is cited, it's clearer that the methodology is consistent.


There are probably better writing tools to help with that than GPT.


Like what? Other than hiring a copy editor, of course.


Academia needs its own version of ChatGPT that parses through the student's assignments, and asks the student a series of AI generated questions based on their own work.

The question-and-answer session would be done over video and recorded.

Seems fairly simple. Answer the question. 10.... 9.... 8.... 7...

Students who wrote their own assignments would be able to answer the questions and those who didn't, couldn't.


I thought this would talk about ChatGPT in academic research writing, i.e. papers.

Many academics are really smart but not very good at explaining things, especially when using a non-native language. Many papers with great ideas don't get those ideas the reach they deserve simply because they are too dense to understand, not just in the content itself, but in the overly verbose and convoluted writing.

If ChatGPT helps academics write better papers it would be amazing. And presumably these papers' logic is actually checked by people (even if they don't go through peer-review) so potential BS is not a problem.


ChatGPT has been great at creating summaries for me, even for pieces that I've written.

I use ChatGPT to riff off ideas and make sure I haven't missed anything; often ChatGPT reminds me that "oh right, I'm missing aspects X, Y, and Z, which would make my project or writing more complete."

With that said, I haven't written anything published in a journal yet, and maybe I'll use ChatGPT to assist (probably in the same way as I mentioned, as a way to riff off and brainstorm ideas, rather than as a content generator; its output is way too boring and verbose for my liking).


I'm extremely verbose and proper in written communication by nature, so none of my clients have even noticed that I've been using ChatGPT in my workflow for the past three weeks.

Personally, I couldn't be more tickled shitless at this creation. It's cheap ($20 a month!!), it's fast, it's already learned my writing and coding style, and it is probably saving me 30-40% of my time every week. The number of projects I've been able to start working on, that I had on the back burner, is just incredible.

I really could not be more excited about where GPT 4.0 and 5.0 will go.


> its output is way too boring and verbose for my liking

Have you tried adjusting your prompts? Even just a single instruction to 'be concise' will make a difference. For some applications, you might be able to give examples of the writing style you are looking for.


Huge business opportunity for OpenAI in selling GPT detection to teaching institutions?

Give away the poison, sell the antidote.



At some point the bibliography needs to reference the LLM itself, which would need to be hosted indefinitely.


I do not think it is feasible to make sure prompts are reproducible. Considering how large LLMs are, you cannot host every version of the model indefinitely.


ChatGPT, when asked this question, answers that its responses are probabilistic, so they aren't reproducible. I tested that myself, of course. Since it gave me two different (but overall equivalent) answers from the same prompt, I'd have to agree.


That's because it's configured with a non-zero temperature. If you use the underlying model API, or the playground, you can get repeatable results when the temperature is zero.
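
With the openai Python package, that looks roughly like this (a minimal sketch, assuming you have an API key; even at zero temperature, bit-identical output isn't strictly guaranteed in practice):

    import openai  # assumes the openai Python package is installed

    openai.api_key = "sk-..."  # your key here

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What is an Outlook OST file?"}],
        temperature=0,  # greedy decoding: repeated calls should match
    )
    print(response.choices[0].message.content)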


Mostly, yes, but there have been reported cases where you get nondeterminism even with zero temperature, probably due to accumulated floating-point error from differing operation order.


You can also generally set an RNG seed to get reproducible results.
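
With a local model, where you control the RNG yourself, that looks something like this (a sketch using Hugging Face's transformers and GPT-2; as far as I know, the hosted ChatGPT API doesn't expose a seed):

    from transformers import pipeline, set_seed  # assumes transformers is installed

    generator = pipeline("text-generation", model="gpt2")

    set_seed(42)  # fixes the Python, NumPy, and torch RNGs
    a = generator("Academic writing is", max_length=20, do_sample=True)

    set_seed(42)  # same seed again...
    b = generator("Academic writing is", max_length=20, do_sample=True)

    assert a == b  # ...same sampled output (on the same machine)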


if it's not reproducible, it's not science


Some would argue that, due to the unfortunately near-universal deprecation of paper authors after 70-90 years, the actual process of writing any particular paper is not and has never been reproducible. As opposed to experiments, which are reproducible and are not generally contained within the operating weights of a LLM nor the thoughts of a human.


Your observation, while accurate, is a tangent. The point of the bibliography in the context of an academic paper is to reference the academic merit of the work. In the case of science, this would be reproducible experiments (ideally).

Perhaps you would prefer to include the generated text source as an author.


I'd rather not include it at all, for exactly that reason. It's just a writing tool - the paper is either correct or incorrect on the same basis as any other paper. We include bibliographies to ensure that the relevant scientific data is present, but I don't think there's any reason to say that a non-reproducible abstract isn't science.


It belongs in the acknowledgments, along with Bob’s wife who did a bit of proofreading, Steve McProfessor who had a chat with the authors once, and whatever software was used for the figures.


This. It’s crazy that people are thinking we should _credit_ the model as if it were an author. It’s a tool, and should be usable without limit.

It would however be nice to have a mode where any output that matches some existing text in the training set could be highlighted, to help one avoid unintentional plagiarism.
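
Even a naive version of that would catch the worst cases. A toy sketch (a hypothetical helper, not anything OpenAI actually offers):

    def verbatim_overlaps(output, corpus, n=8):
        """Flag any n-word span of the model's output that appears
        verbatim in a reference corpus. A real system would index
        the corpus rather than scan it per span."""
        words = output.split()
        hits = []
        for i in range(len(words) - n + 1):
            span = " ".join(words[i:i + n])
            if span in corpus:
                hits.append(span)
        return hits

    # e.g. verbatim_overlaps(model_output, all_training_text)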


Plagiarism will undergo a change in definition.


Plagiarism used to be the norm until mass similarity checking became available.


In published writing?

If so, we could pattern match historically, and discover the base rate of plagiarism.


See, here is the thing: you didn't need ChatGPT to cheat.

Nature's neural networks (also called humans) have been willing to provide cheating services for far cheaper than it costs OpenAI to run ChatGPT. So the question is, what were you doing then?

----

I'll share a personal anecdote on cheating and the futility of it. I am taking a course on SQL at Coursera, and at the end of the course there is a peer-reviewed project you have to do (you submit your project for others to mark, and you have to mark the project of four other students to pass).

The thing is, you can already find the entire project online; a few people have posted their projects for people to see, probably to help build their profiles (just google "yelp coursera").

And obviously what most students did was just copy-paste the damn thing. I immediately recognized that when I was marking the other candidates' projects; they didn't even bother attempting to hide it.

The thing is, I am not doing this course because I am being forced to by work or anything, but because I wanted to learn. Therefore, copy-pasting the project would not have achieved that goal.

See, I wouldn't even have needed Google to cheat; my BiL is a computer scientist, and I could have easily used him to cheat if that was the goal.

But I wanted to learn, so I did it on my own, as did many other students whose projects I did end up agreeing to mark (I refused to mark the cheaters' projects and just refreshed the tab).

And since I am on HN and in tune with tech goings-on, I had already applied for the new BingGPT Chat, so I used that to help with my project.

----

I didn't just paste my project questions; I used it like I would have used my BiL to help: I asked it questions when my code didn't run like it should have.

In other words, I would, say, google (and yes, Google; regular Bing search is still bad) for questions like normal, but the answer from StackOverflow would be for, say, MySQL, so I would ask the BingGPT Chat to give me an example of doing the same thing in SQLite, which is what the course used.

It would then recognize that I couldn't use a CTE, for example (a limitation of the Coursera web portal, not SQLite, in this instance), and give me an alternate way to do the same thing, and now I could use the concept to answer my own question.
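
For a flavor of the kind of rewrite it suggested (a hypothetical sketch with a made-up table, not the actual course data):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE review (business TEXT, stars INTEGER)")
    con.executemany("INSERT INTO review VALUES (?, ?)",
                    [("cafe", 5), ("cafe", 2), ("bar", 4)])

    # The CTE version a StackOverflow answer might give:
    #   WITH avg_stars AS (
    #       SELECT business, AVG(stars) AS avg_s FROM review GROUP BY business
    #   )
    #   SELECT business FROM avg_stars WHERE avg_s >= 4;

    # The same query without a CTE, using a plain subquery:
    no_cte = """
        SELECT business FROM (
            SELECT business, AVG(stars) AS avg_s FROM review GROUP BY business
        ) WHERE avg_s >= 4
    """
    print(con.execute(no_cte).fetchall())  # [('bar',)]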

----

In other words, ChatGPT is now acting as that educated sibling/relative you can ask questions of to do homework... just as people always have, since before the existence of computers.

The only benefit is that now EVERYONE has access to that educated "relative", not just the lucky ones. Now everyone can get a customized answer, and that's wonderful, imho. More importantly, you can now ask it follow-up questions.

Don't blame ChatGPT for cheating; people who wanted to cheat always could.

----

The problem with citing ChatGPT is: has anybody ever cited the relative who helped with their project? Or cited the Google search string they used?

I understand what the author is getting at, but it's still strange. ChatGPT isn't the authoritative source, so why cite the prompt?



