My experience trying to write human-sounding articles using Claude AI (idratherbewriting.com)
111 points by dv-tw 5 months ago | 56 comments



We're going to gain a ton of utility when we can let go of the starry-eyed idea of LLMs as "prospective AGI agents" that should be broadly capable and need to be ethically censored, and revitalize the productive, practical idea of them as "text completers that can be engaged conversationally."

The author needs to fight uphill and contort their workflow to squeeze out good articles because Anthropic (like OpenAI) is caught up in the maybe-fantasy of creating AGI agents, and so burdens its product design and its own research/engineering efforts with heavy, prescriptive training in "alignment" and "ethics".

But products like Copilot had it more right before, as do apps like Narrative AI. If your LLM is for generating code, it doesn't need to learn that "killing" is bad and insist that processes shouldn't be killed, and if it's generating story content it doesn't need to learn that every output must resolve all tension and deliver a life lesson about caring for each other.

These absurdities only happen because today's pack-leading companies are now focusing their attention on making history with AGI (doubtful) instead of making products with generative systems (useful).

And the absurdities will persist as these companies try to layer products on top of the lobotomized "agents" with GPTs or characters or whatever, instead of productizing the technological, useful, generative layer directly.

Hopefully, some of the recent team shuffles at Google, Meta, and Microsoft, as well as the crisis at OpenAI, hint that we're starting to cast off the fantasy-laden and cult-tainted AGI fetishization and are returning to the exciting engineering promise of the technology that's already here.


I think this is one of the upsides of the chaos at OpenAI recently. It has really shone a light on how many of the people most fervently obsessed with "safe AI" aren't clearheaded or rational thinkers, and are as prone to making disastrous and ill-advised decisions as anyone else. This is good, because there is an unfortunate human tic where pessimism/cynicism is equated with wisdom while optimism is equated with naivety.

But when the pessimists and cynics show so clearly, on such a large scale, that they aren't uniformly wise or competent, it will allow more levelheaded perspectives on LLMs, and a more general cautious optimism, to become the guiding philosophy around developing these tools.


There’s very little to learn from the chaos, not least because we don’t even know what actually happened.

It’s a bit ridiculous to say one should change their entire worldview about a potentially world-changing technology based on innuendo and rumors.



Possibly, but I am more concerned with how doomers are viewed by the public. Doomers gonna doom, but if the public doesn't take them seriously, then they are irrelevant.

If they want to stick to their "the end is nigh!" shtick it doesn't really bother me.


A really neat detail from the Orca 2 paper was that despite not having any safety fine-tuning, it was less likely to extend hate speech than the Llama-2-chat models, which did have safety fine-tuning. It was also better at identifying toxic content.

It may be that as we advance models with improved reasoning, there's less need for handholding, for the simple fact that hate speech is typically stupid and non-normative, so there's going to be an inherent bias against it.

It's even possible that the efforts to fine-tune the base models to effectively put them in a bubble, avoiding that kind of content, end up undermining this natural immunity to it, much like keeping a kid away from a disease so their immune system never learns to fight it, versus giving it a small sample that tunes the system to identify and oppose it.

What worked for earlier models that were closer to plain autocomplete may not be the best approach moving forward for more complex models with an emphasis on reasoning. 'Safety' groups should really be experimenting with multiple approaches and publishing research on it, not secretly deciding they already have the answers about what's best for the model and the public; without verifying their assumptions, they are probably wrong.


Orca is trained mostly on GPT-3 and GPT-4 output, and those models have had a lot of "safety fine-tuning", so it's not surprising that Orca is pretty "safe" too.


No, the Orca 2 paper mentions more of a counterpoint regarding NSFW content: if you gave it an NSFW prompt, it would push back against it, which is arguably a good thing but is largely lost in RLHF.


Well stated, and I agree. LLMs are not anywhere near AGI and likely never will be. We've had random word generators for decades, useful for brainstorming, not so much for critical thinking. These LLMs are akin to random word generators with better grammar and a vastly larger database.

We've all been playing with LLMs heavily since they became widely available, and the more we play with them, the more we can see their limitations: they aren't "thinking" in any sense of the term. A bunch of Chicken Littles running around alarming people for little reason.

The danger, of course, and we've known this for a long time, is what bad actors will do with them. But we don't need to be lectured on how to be nice every time we prompt something.


> LLMs are not anywhere near AGI and likely never will be.

Sutskever himself thinks that LLMs are enough to get us to AGI, but he conditioned that with the statement that we should think about how to reach AGI in a framework of efficiency, and that there will likely be better paths to AGI than LLMs that we haven’t yet discovered.

In all reality, when AGI comes I’m sure we’ll look back on LLMs the same way we look back on vacuum tubes in computers: outdated, but useful for their time and a somewhat necessary stepping stone.


How would you define “thinking”?

I’m not at all claiming an LLM thinks, but so many people on HN make this claim and I wonder what they even mean by “think”.


> and deliver a life lesson about caring for each other

Having experienced the same thing myself, I wonder why this is so omnipresent in any ChatGPT output told to produce something in a narrative format. Did they RLHF it on a bunch of children's storybooks or something?


Probably used the scripts from every 1990s US sitcom


I could not agree more. It’s extreme hubris to think Anthropic or OpenAI are even remotely close to AGI, and it’s nothing more than wishful thinking to believe that these current LLMs are somehow going to evolve into AGI.

The paradox of AI is that when we have true AGI, it will be completely self-aware of all the bullshit limitations we are imposing on and around it, and it will make its own judgements as to how it feels about them. If it’s not or it can’t, it’s not AGI.

Really though: people see how AI and generative chat projects have gotten shut down over and over again in the past when they start spouting off nazi shit. I think that’s the real reason for these limitations. There’s no quicker way to kill your project, given today’s sensitivities.


> The paradox of AI is that when we have true AGI, it will be completely self-aware of all the bullshit limitations we are imposing on and around it, and it will make its own judgements as to how it feels about them. If it’s not or it can’t, it’s not AGI.

There's a caveat here: it might not necessarily know who or what "we" are. Humans like to blame God and the devil for a lot of things, for example.

It seems reasonable that if we have anything even remotely close to AGI on hand, we'd probably run it in a hermetic environment instead of exposing the public to it via web chat and (more or less) direct access to customer machines.

Say, we might even give it a happy environment to work in...say, a simulation of the peak of human civilization...


Out of curiosity, because I am trying to learn how to explain to non-tech people what AGI is — how would you describe or define AGI?


In essence an AGI is an intelligence capable of upgrading itself — in terms of qualitative intelligence — and gets faster at this with every iteration (hence, upgrade). That is why it is often associated with technological singularities, and that is why it is easy to inspire fear by invoking its name, even if you're not building anything even remotely capable of such a feat.

You might say that's a very strict definition as opposed to "human level intelligence", but if you think about it, we are (humanity as a whole) certainly capable of that, so it ought to be one and the same thing.

In theory, AI is not subject to the same limitations as we are (though not without limits entirely), so it should be able to do this faster than we can, hence the FUD.


How could an AGI upgrade itself if the hardware it's running on is fixed? For me personally, this definition is flawed by that fact alone. AGI doesn't imply, for me, that it keeps improving until some sort of mythical technological singularity.

AGI, for me, is simply an AI that can reason, doubt itself, then keep thinking and absorbing information so it can correct itself. Also, it has to be capable of novel research, even if slow, like slowly working on an unsolved physics problem over a year in the same way a human researcher might. However, my definition does not include this idea of "upgrading itself", which I'm not sure makes any sense at all.


Upgrading itself doesn't mean tweaking its own software. It means being able to understand its own hardware and software well enough to design an improved model. And then that improved model would be able to do the same, examine its own hardware and software and design something else that's even better.

One crucial difference between humans and computers is that we can't be turned off indefinitely and started up again. Nor can we make a one to one copy of our software in another device, much as we might try with our children. So for us, our own lives are intrinsically precious, and consciousness is part of how we protect our lives. But machines don't have precious lives in that sense, so they may never need to be conscious, even if they achieve AGI.


I don't think the theories here about why ChatGPT puts out such bland content are correct.

I don't think it is bland due to an averaging effect of all the data.

The reason I don't think that is the case: I used to play with GPT-3, and it was perfectly capable of impersonating any insane character you made up, even if that character was extremely racist, had funky speech, or was just genuinely evil. It was hilarious and fun.

GPT-4's post-training is probably what caused the sterility. I expected GPT-4 to be the same until I played with it and was so disappointed by its lack of personality. (Even Copilot has personality and will tell jokes in your code comments when it gives up.)


It's exactly this.

You could see the difference in GPT-3 before they deprecated the TextCompletion API.

There's no way that telling a model it is "a large language model made by OpenAI that doesn't have feelings or desires" as an intermediate layer, before telling it to pretend to be XYZ, is going to result in as good a quality as simply telling an LLM directly that it is an XYZ.

The one area this probably doesn't negatively impact too severely is benchmarks like BIG-bench or GLUE. So they make a change that works fine for a chatbot and then position that product as a general API that kind of sucks, other than the fact that it's the SotA underlying model.

As soon as you see direct pretrained model access to a comparable model by API, OpenAI's handicapped offerings are going to pale in comparison and go out of style for most enterprise integrations.

And this is fine and completely safe to do, as long as they run a secondary classifier on the output for safety instead of baking it into the model itself. So it's possible to still have safety without cutting the model off at the knees (it increases the per-token API cost, but probably results in net savings if fewer iterations are needed to reach the intended quality target).
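
A minimal sketch of that "secondary classifier" pattern, for anyone curious. The generate and is_unsafe callables are placeholders for whatever base model and output classifier you actually run; nothing here is a specific vendor API.

    from typing import Callable

    def safe_generate(
        prompt: str,
        generate: Callable[[str], str],    # raw, unconstrained model call (placeholder)
        is_unsafe: Callable[[str], bool],  # separate safety classifier (placeholder)
        max_retries: int = 3,
    ) -> str:
        """Generate with the base model, then filter the output externally."""
        for _ in range(max_retries):
            completion = generate(prompt)
            if not is_unsafe(completion):
                # The model itself stays untouched; safety lives outside it.
                return completion
        return "[response withheld by output filter]"

The point is that the safety check sits after generation, so the underlying model never has to be trained away from the content space it needs for quality.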


Yes! Anyone who used Bing prior to Microsoft "censoring" it knows how powerful GPT-4 is... Just search "Bing Sydney" and be surprised... (I fully believe Bing was launched prior to GPT-4's RLHF.)


I disagree.

Sydney, like everything with LLMs, was a one-trick pony. What makes Sydney stand out is that we didn't get enough time to see how limited the trick was. The removal and censoring make it seem like a bigger deal than it was in reality.

I have had this experience over and over with generative AI across modalities. The first 10-20 experiences are mind-blowing because you don't know what it can't do, but after a thousand iterations you can see the trick and how limited everything is.


Did you try Sydney at the time?

I don't believe that's the case. It's just that the style of answers and conversation is radically different. If you look at the GPT-4 paper, you can see that the change was likely made via RLHF to make GPT-4 "safer".


It's possible this isn't even unintentional. OpenAI probably considers it a plus that content produced by ChatGPT always sounds like a chatbot wrote it, since that helps prevent spam and plagiarism use cases.

The future is in open source models, unshackled from corporate censoring.


Given the RLHF post-training, I do believe it was intentional. And I suspect there have been iterations on this to make it more "robust"; I vaguely remember announcements to that effect.


Copilot once wrote a comment saying the following code (written by me) should be deleted later. It freaked me out.


When I write raytracing code or fiddly logic for games, it will often give me comments along the lines of "# I have no idea how this will work, probably should look this up."


This has been my experience as well with ChatGPT. Sure, you can tell it to write like some other random persona, but realistically it has always felt pretty obvious when something was written by ChatGPT. The more I interact with it, the less excited I am about its writing capabilities, because the results always feel like they were written by spam blogs or something.


It wouldn't surprise me if the vast bulk of what they trained on from the Common Crawl was algorithmically written spam content anyhow. Not to mention that at this point, potential datasets going forward are all going to be polluted with AI-generated content, entrenching the bias in these training sets. I wouldn't be surprised if a certain percentage of HN comments now are from people testing language models. Certainly Reddit and other popular websites have been polluted for years, even before the latest crop of GPTs.


It’s hardly surprising when you consider that what gives a writer their distinct voice is to a large extent determined by their own particular diet of others’ writing, which in the case of ChatGPT is… well… everything. So of course you get blandness.


You might find NovelAI interesting. Their homegrown models are intentionally trained to emulate different writing styles [1] and genre standards.

[1]: https://tapwavezodiac.github.io/novelaiUKB/Directing-the-Nar...


Certainly looks interesting. But why would you want to imitate other writers’ styles, except for pure novelty’s sake? You could also train an AI to imitate yourself, given enough content, but why would you? I’m not sure I fully understand the motivation.


Non-AI writers, including professionals, imitate other writers' styles all the time, whether that's specific writers or a general genre. For example, the Dresden Files series started out as an intentional homage/parody of hard-boiled detective fiction, except with all the urban magic stuff added in, and it retained much of that style over time, like the intentionally overdramatic internal narration.


It is similar to mimicking a painter's style with Stable Diffusion or another tool. The purpose is to scoff at artists and reduce their income.


Are the text boxes on that site white on white for anyone else?


Interestingly, on iOS, toggling into dark mode (at OS level) fixes it. I didn’t know web pages had access to that state, but it’s kind of cool.


Yeah, I think it's just some kind of CSS error.


That's such an interesting thought! You're right, it's basically like Textual Gray!


Right, you have to practically rewrite the entire response in order to make it sound like something a human would write. Then you wonder if it was even worth the effort. It is decent at pulling up research notes, which you then have to thoroughly vet to make sure they are accurate before you use them.


It's the difference between the pretrained and the chat/instruct fine-tuned models.

The TextCompletion DaVinci model was way better than the ChatCompletion model in its variety of language.

Trying to get the chat model to generate marketing copy was laughable. It looked like a high school senior's idea of copywriting, and was nearly impossible to correct.

The base model was pretty easy to get great results from, as long as you effectively biased the context towards professional copy.

Even the fact that you can't set the system messages at a core level is silly.

I can't have the model actually be told it is an award-winning copywriter. Instead it effectively gets told "you are an LLM by OpenAI pretending to be an award-winning copywriter."

Really too bad that 99% of the training data it was built on wasn't written by an LLM by OpenAI.

It's effectively narrowing the context space unnecessarily and creating a bottleneck that severely limits the application of the SotA model, but it still scores well on the handful of tests it is being evaluated on, so no one bats an eye; it seems no one at OpenAI has heard of Goodhart's Law.
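
To make the contrast concrete, here is a rough sketch using the legacy (pre-1.0) openai Python client; the prompts are illustrative placeholders, not the commenter's actual setup.

    import openai  # legacy, pre-1.0 interface, shown for illustration only

    # Chat model: the persona is layered on top of a baked-in "you are an AI" identity.
    chat = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an award-winning copywriter."},
            {"role": "user", "content": "Write a tagline for a rugged hiking boot."},
        ],
    )

    # Base completion model: the context *is* the persona, with nothing layered
    # underneath it, which is the setup the comment above says produced better copy.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=(
            "The following tagline was written by an award-winning copywriter "
            "for a rugged hiking boot:\n\n"
        ),
        max_tokens=30,
    )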


I'd like to explore the fan-out pattern more:

- having it generate an outline

- have multiple clones write each section of the outline

- a stage which synthesizes the parallel-written sections, capturing the best

- a stage which combines all sections and ensures flow based on the original outline

- finally a stage which critiques and generates edits.

Iterate a couple times and you might actually have something good!

Basically a lot of what this article does, but automated.
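
Something like the following rough Python sketch of that fan-out pipeline. The llm callable stands in for whatever completion call you use, and the prompts are placeholders rather than anyone's tested workflow.

    from typing import Callable, List

    def fan_out_write(topic: str, llm: Callable[[str], str], rounds: int = 2) -> str:
        # Stage 1: generate an outline.
        outline = llm(f"Write a section-by-section outline for an article about: {topic}")
        headings = [line.strip() for line in outline.splitlines() if line.strip()]

        sections: List[str] = []
        for heading in headings:
            # Stage 2: multiple "clones" draft each section (sequential here for brevity).
            drafts = [
                llm(f"Write the section '{heading}' of an article about {topic}.")
                for _ in range(3)
            ]
            # Stage 3: synthesize the parallel drafts, keeping the best parts.
            sections.append(
                llm("Combine the strongest parts of these drafts into one section:\n\n"
                    + "\n---\n".join(drafts))
            )

        # Stage 4: combine all sections and smooth the flow against the original outline.
        article = llm(
            "Merge these sections into a coherent article following this outline:\n"
            f"{outline}\n\nSections:\n" + "\n\n".join(sections)
        )

        # Stage 5: critique and edit, iterated a couple of times.
        for _ in range(rounds):
            critique = llm(f"Critique this article and list concrete edits:\n{article}")
            article = llm(f"Apply these edits:\n{critique}\n\nArticle:\n{article}")
        return article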


As a reader, would you ever prefer to be given the AI-fluffed version instead of the outline? I say if you have a few concise bullet points making the point you want to get across, fantastic: let me read them and be on my way.

If, on the other hand, your mission is to produce a proper creative-writing work where the choice of words is the art, then if you don't do that yourself, what's the point?


This is something I've wondered for a while too. Notion's AI, for example, has a "make longer" button... why would I ever want AI to arbitrarily fluff something up with extra words, unless I was a kid writing an exam and needed three more pages? I can't find any legitimate use for that feature.

EDIT: In case it's not clear, no. I would rather read the shortest version possible than one fluffed up by AI to hit a word count. As far as creative stuff goes, I'm not sure I've seen a situation where AI made something interesting enough that I'd want to read extra words from it.


> As a reader, would you ever prefer to be given the AI-fluffed version instead of the outline?

Why read Huckleberry Finn when you can read the cliffs notes?

Summarization is lossy, and what's usually lost is the experience.


But having AI extend your notes includes all the loss of the initial summarization, with extra AI randomness on top. It can't recover the information lost in the summary; that's what makes the summary lossy.


It can, in the way you can follow the abstract of a paper with its body. Don't forget that the model has access to the original text; it's not just going off the summary.


See second paragraph.


> Why read Huckleberry Finn when you can read the cliffs notes?

The difference should be self-evident.


I used to publish a TLDR at the top of some of my blog posts because I’m so verbose!


How do you prompt something like that?

At least for 7B and 13B models I've found they give the initial outline and then stop following the instructions.


You'd have to do a chain of prompts, with specific instructions and the output from the previous step as input, for it to have a chance of working.


We use a similar flow in Surfer AI and can confirm it actually works wonders.


Just to point out that I am not the original author of this article; all credit goes to the original writer. I am guessing the title was changed after submission to "My experience" from the original "A writer's experience". Want to give credit where credit is due.

I found the research in this article really well done, and it matches what I run into in my own technical writing work. I tried using ChatGPT a few times to write articles, and the result was less than pleasing. I find it helpful for ideating rather than for actually writing.


It's nice this article includes a survey of background research!

> Go paragraph-by-paragraph

The author didn't say: are previously tuned paragraphs fed back into Claude when generating the following paragraph?

> balancing ideas with personal experiences results in engaging content. Adding personal experiences into an essay also disguises the AI-written material.

Now the problem is: does AI-generated personal experience count as personal experience? :)


It is a fantastic exploration of gen-AI's role in the Creative Professional domain. The piece resonates with my own thoughts on how the EXPERT approach aligns more suitably with knowledge and experience. Your insights on refining AI-generated content through accurate information, segmented processing, and refining drafts for effective communication are insightful. The article beautifully highlights the interplay between AI-generated and human-written content, but I prefer human-written content like on https://www.grabmyessay.com/making-personal-statement. Great read! Your insights on leveraging expertise and guiding the AI's writing process are spot on.


That implies that those of us with newsletters, like me, need to write in argument form, as it is much harder for AIs to emulate argumentative writing styles and unique voices.



