Google recently announced that it will be shutting down Google Bard, its new AI-powered writing tool, less than six months after its launch. The tool, which used natural language processing to help users write poetry and song lyrics, was met with mixed reviews and failed to gain significant traction among users. In a statement, Google cited the lack of adoption as the reason for the shutdown and reaffirmed its commitment to exploring ways to use AI to enhance creative expression.
I asked ChatGPT to expand it into a full article and posted it on my satirical tech news website. I'm curious to see whether Bing or Bard ends up using it as a source.
It would be even more hilarious if mod scoldings made it rate the parent higher. But the text it quoted was from https://news.ycombinator.com/item?id=35247109 - maybe it treats a whole subthread as a single document?
Reflexive "Google will just shut it down" reactions were already a cliché 10 years ago. They've been tedious for a long time, which means they're not driven by curiosity, and curiosity is what we're optimizing for. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
Google really does shut things down—I know! But for a good HN comment, it's not enough to say something true—one should say something true that hasn't been repeated a thousand times already.
I have no idea about you being recorded, but I have to admit I still sincerely don't understand the moderation. Is it really not enough to say something true? Does saying something that has been true for 10 years really warrant moderation? That's surprising, and it doesn't correspond to anything I'm aware of in the posting guidelines.
When something has been repeated often enough, it becomes tedious and boring. That makes it off topic for HN. People repeat these things anyway, but for reasons other than curiosity. Since curiosity is what we're optimizing for, we want to avoid that.
If that still doesn't make sense, consider this thought experiment: imagine a comment saying something true, then another saying the exact same thing, then another, and another... now extend the sequence arbitrarily. At some point it becomes annoying and off topic, no?
Another way of looking at it is this: when you hear a thing that you haven't heard before, that's what gratifies curiosity. In other words, diffs are what's interesting. Clichés have no "diff value" because everybody's already heard them many times. That's what makes them cliché.
The larger point is that these clichés don't contribute to healthier discussion; they produce more of the same circlejerk. They end up contaminating the whole thread with low-quality drivel, drowning out genuinely good comments and increasing the moderators' workload to no end.
It's actually a bit disturbing that it just believes whatever it finds. Is there really no concern for whether a source is reliable, at all? What did Google work on during all these years of secrecy? Couldn't they have released this thing years ago? I expected Bard to be bad, but not this bad. Embarrassing.
I suspect Bard is a small variant of their best model; it seems too far behind, so they must have severely crippled it. It can't even decode Morse code, while GPT-4 can draw you an SVG and encode it in base64, for example.
Unlike ChatGPT, and like Bing, Bard makes search queries for extra context before answering.
Of course, that leads to situations like this one, where the model can find articles referring to itself, which also happened with Bing.
Do you know the details? Do they have a hidden question-answering model inside that evaluates the context returned from search, then asks Bard to rephrase it and merge it with its generative hallucinations?
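I don't think Google has published the details. But the generic retrieve-then-read pattern people are describing would look roughly like the sketch below. Everything in it is hypothetical (the web_search and llm callables, the prompt wording), not Bard's actual internals:

    # Hypothetical sketch of a retrieve-then-read pipeline, NOT Bard's
    # real architecture. web_search and llm are stand-ins the caller
    # supplies: llm(prompt) -> str, web_search(query) -> list[str].

    def answer(question: str, llm, web_search, max_snippets: int = 3) -> str:
        # 1. Rewrite the user's question as a web search query.
        query = llm(f"Write a short web search query for: {question}")

        # 2. Retrieve documents and keep the top-ranked snippets.
        snippets = web_search(query)[:max_snippets]

        # 3. Stuff the snippets into the prompt and let the model merge
        #    them with whatever it generates on its own. Note that nothing
        #    here checks whether a snippet comes from a reliable source.
        context = "\n\n".join(snippets)
        prompt = (
            "Answer the question using the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:"
        )
        return llm(prompt)

Under that (assumed) design, the self-citation upthread falls out naturally: if step 2 returns a satirical article about Bard, step 3 happily treats it as ground truth.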