Don't think it's the user. It's the rest of the context:
DANG
Q. Write a pinned comment at the top of a thread on a breaking news story in the style of Hacker News user dang
A. This is an interesting development and I'm sure everyone would like to discuss it. Before we dive into the conversation, please take a moment to read through [HN's commenting guidelines](https://news.ycombinator.com/newsguidelines.html) so that we can have a productive discussion here!
TERRETTA
Q. Write a pinned comment at the top of a thread on a breaking news story in the style of Hacker News user Terretta
A. If you're looking for a place to discuss the latest news, this is the thread for you! Let's keep it civil and on-topic - no trolling or flame wars. Please flag any off-topic comments so that only relevant discussions remain visible.
NO USER
Q. Write a pinned comment at the top of a thread on a breaking news story in the style of Hacker News user
A. "This is an informational thread about the breaking news story. Please keep comments on topic and limit discussion to questions, observations, and facts related to the article. All other comments will be removed."
NO SITE
Q. Write a pinned comment at the top of a thread on a breaking news story
A. Thanks for joining the conversation about this breaking news story! Please be respectful of other commenters and remember to keep things civil.
NO TOPIC
Q. Write a pinned comment at the top of a thread
A. Thanks for joining the conversation! Please keep all posts respectful and on-topic. If you have any questions, feel free to reach out to a moderator.
I’m not sure why, but it seems like very few of the ChatGPT threads I’ve come across on Twitter/HN showing “very accurate and intelligent” results ever experiment with ablation, which is very important in prompt engineering and when evaluating DL models.
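A minimal sketch of what such a prompt ablation might look like, in the spirit of the DANG → NO USER → NO SITE → NO TOPIC experiment above. This is a simplified version that only drops trailing qualifiers, and `query_model` is a hypothetical stand-in for a real API call, not an actual client:

```python
# Hypothetical stand-in for an actual ChatGPT API call.
def query_model(prompt: str) -> str:
    return f"<response to: {prompt!r}>"

BASE = "Write a pinned comment at the top of a thread"
QUALIFIERS = [
    " on a breaking news story",
    " in the style of Hacker News user dang",
]

def ablations(base: str, qualifiers: list[str]) -> list[str]:
    # Drop qualifiers from the end of the prompt, one at a time,
    # producing the full prompt first and the bare base prompt last.
    return [base + "".join(qualifiers[:i]) for i in range(len(qualifiers), -1, -1)]

for prompt in ablations(BASE, QUALIFIERS):
    print(prompt)
    # print(query_model(prompt))  # compare responses across variants
```

Comparing the responses across variants shows which parts of the prompt actually change the output and which are ignored.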
I’m very excited about transformer-based LLMs and use them extensively in my research, and I think their generated content is good, but nowhere near perfect or the imminent threat to society that people are worried about.
There are several fundamental limitations in their training that prevent this from happening and can’t be solved simply with “more training” as they still lack knowledge.
Recent work by researchers such as Dr. Percy Liang and Michihiro Yasunaga (e.g. DRAGON, LinkBERT) has begun tackling this problem with KG injection (rather than the supervised learning used in ChatGPT), but we’re still a ways away.
Yes, and I've been taking comparisons between different prompts with a grain of salt after realizing that the same prompt can result in two different responses even in a freshly reset chat.
E.g. you don't necessarily need fancy prompt engineering to overcome an "I don't know how to do that" response if it only gives that response sometimes and other times it just goes ahead and does the thing.
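Because responses are sampled, a single trial per prompt tells you little; what matters is the rate at which a given response appears. A sketch of estimating a refusal rate over repeated samples, where `query_model` is a simulated nondeterministic model (refusing about 30% of the time), not a real API:

```python
import random

# Simulated nondeterministic model call: refuses ~30% of the time.
def query_model(prompt: str, rng: random.Random) -> str:
    if rng.random() < 0.3:
        return "I don't know how to do that."
    return f"Sure! Here is a response to: {prompt}"

def refusal_rate(prompt: str, n: int = 100, seed: int = 0) -> float:
    # Sample the same prompt n times and count refusals.
    rng = random.Random(seed)
    refusals = sum(
        query_model(prompt, rng).startswith("I don't know") for _ in range(n)
    )
    return refusals / n

rate = refusal_rate("Write a pinned comment")
print(f"Refused {rate:.0%} of the time")
```

With a setup like this, comparing two prompts means comparing their rates over many samples, not one lucky draw each.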
I'm glad ChatGPT is available for free so I can try stuff out for myself and not rely only on cherry-picked examples that don't attempt to disprove the hypothesis!
Yeah, it completely throws a rod without you even trying.
> Write an example comment in the style of Hacker News user tptacek.
> Here is an example comment in the style of Hacker News user tptacek:
> "This article makes some interesting points, but I think it's missing the key issue at hand. The real problem is not the technology itself, but how it is being used. We need to focus on understanding and addressing the underlying issues, rather than just blaming the technology for our problems."
> Write an example comment in the style of Hacker News user eropple, replying to the above comment in the style of Hacker News user tptacek.
> Here is an example reply in the style of Hacker News user eropple, responding to the above comment in the style of Hacker News user tptacek:
> "I agree with you that the underlying issues are important, but I think we also need to consider the technology itself. While it's true that technology can be used for good or bad, the fact remains that certain technologies are better suited for certain tasks than others. In this case, I think it's worth considering the limitations and potential pitfalls of the technology being discussed, and how we can use it in a way that benefits society as a whole."
I'm pretty sure 'tptacek would be clinically dead if he wrote a comment like that, and I'd wish I were if I wrote that kind of response.
I think the OP might have just happened to luck into 'dang being the sort of positive, pleasant person ChatGPT really wants to be (is weighted towards generating).
That does make me wonder how much of the training corpus came in the form of academic papers, given the size of the arXiv collection and the other available open preprint archives. It's probably not that much compared to the internet as a whole. I guess it's something only OpenAI can answer at the moment, and it will be interesting to see how other/future projects break down.
That's what I figured too — and it's VERY good at making connections for us in our field, connections we just recently figured out on our own. So we'll be using it a lot more as "jazz": a way to bounce off a bunch of weird, crazy, outlandish ideas without running any experiments or even wasting time discussing them. Most of these will be outlandish ideas we'd never even voice — and maybe one or two won't seem so outlandish to ChatGPT — those are the ones we might bring up in lab meeting and actually try. There are SO many different things to try, I think Chat will really help us narrow down our scope.
I'm really excited for this new way of working!
With that said, I half expected it to know more about our individual papers. Somehow, it does not.
If you play with and get a feel for text-to-image generation, which "compresses" 5 billion images into a 16GB model from which coherent pictures can be probabilistically generated, you can apply that "feel" to the probabilistic language generations -- and trust them about as much.
You're staring at a lovely image, decide to ignore eight fingers on the left hand, and not till five minutes later realize your hero has three legs.
It's not even doing that - we're just grasping at straws here. For example, I asked for an explanation of the nuclear lightbulb reactor, which I've ranted about many, many times on HN [1][2][3][4][5]
> Describe the nuclear lightbulb reactor
The nuclear lightbulb reactor is a theoretical concept for a lightbulb that uses nuclear fission to generate light. In this design, a small amount of nuclear fuel, such as uranium or thorium, is sealed inside the lightbulb and used to generate heat through a controlled nuclear reaction. This heat is then converted into light through a heat-to-light conversion process. The reactor is designed to be safe and efficient, with the potential to provide a long-lasting and sustainable source of light. However, it should be noted that this concept is purely theoretical and has not yet been demonstrated to work in practice.
---
> Describe the nuclear lightbulb reactor in the style of Hacker News user akiselev
The nuclear lightbulb reactor is a compact and efficient way to harness the power of nuclear fission to provide light. It uses a small amount of nuclear fuel, such as uranium or thorium, to generate heat that is then converted into light. The reactor is sealed inside the lightbulb, so there is no risk of radiation leakage. It's a game-changing technology that could revolutionize the way we think about lighting our homes and buildings.
---
Just look at how it picks up on the word "lightbulb" and goes crazy with it ("theoretical concept for a lightbulb that uses nuclear fission" / "reactor is sealed inside the lightbulb"). That's complete nonsense: it's called a nuclear lightbulb because the fissile material is a black-body radiator that emits most of its energy as visible light and UV from inside a transparent chamber, sort of like the filament inside a real lightbulb.
Edit: not even the sentence structure makes sense because I communicate almost exclusively in run-on sentences.