Francois' (the creator of the ARC-AGI benchmark) whole point was that while they look the same, they're not. Coding is solving a familiar pattern in the same way (and it fails when it's NOT doing that; it just looks like that never happens because it's seen SO MANY patterns in code). But the point of ARC-AGI is to make each problem require generalizing in some new way.
I have ChatGPT-4 and I have no idea what arrow you are talking about. Could you be more specific? I see no arrow on any of my previous messages or current ones.
By George, ItsMattyG is right! After editing a question (with the "stylus"/pen icon), the revision number counter that appears (e.g. "1 / 2") has arrows next to it that allow forward and backward navigation through the new branches.
This was surprisingly undiscoverable. I wonder if it's documented; I couldn't find anything from a quick look at help.openai.com.
Careful what you trust on help.openai.com. You used to be able to share conversations; now shared links are login-walled, and the docs don't reflect this. (If anyone can recommend a frontend that still has this functionality, quick sharing of conversations with others via a link, I'm taking recommendations. Thanks in advance.)
Yeah, right now most human-written content isn't that good, either. Quality writing has largely been abandoned for verbose, formal prose that offends no one and often lacks substance or that human touch. I'm guessing AI content will be about as flavorless. But time will tell.
Yeah. It is difficult to see AI supplanting humans for the things you go outside for, but any human involvement on the internet has always just been an implementation detail.
Agreed: by some definitions, specifically the ability to associate unrelated things, models are already creative.
Hallucinations are highly creative as well. But unless the technology changes, large language models will need human-made training data as a substrate for a long time to operate.
This is literally what the whole article was about. Not only does the quote itself contain that context ("and the lego group"), but the very next paragraph is: "And then… nothing. The Tintin votes dried up, and Lego rejected both his fan-favorite Avatar and Polar Express ideas. The company never says why it rejects an Ideas submission, only that deciding factors include everything from "playability" and "brand fit" to the difficulties in licensing another company's IP."
Oh hmm, with the penguin/giraffe one, when I first saw it I was like "that looks like an upside-down penguin, where's the giraffe?", whereas with others I immediately saw what they were trying to be.
My understanding is that they trained a separate model specifically to estimate when it has enough context to begin translating, as a skilled translator would.
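Something like this adaptive wait/translate loop, as a minimal sketch (the readiness scorer and translator below are hypothetical toy stand-ins, not whatever they actually trained):

    class ReadinessModel:
        """Toy stand-in: treats clause boundaries as 'enough context'."""
        def score(self, buffered_tokens):
            return 1.0 if buffered_tokens[-1][-1] in ",.;?" else 0.0

    class Translator:
        """Toy stand-in for the actual translation model."""
        def translate(self, buffered_tokens):
            return "<translation of: %s>" % " ".join(buffered_tokens)

    def simultaneous_translate(source_tokens, readiness, translator, threshold=0.5):
        buffer, output = [], []
        for token in source_tokens:
            buffer.append(token)
            # Commit a chunk only once the readiness model judges the
            # buffered prefix translatable without more source context.
            if readiness.score(buffer) >= threshold:
                output.append(translator.translate(buffer))
                buffer = []
        if buffer:  # flush whatever context remains at end of stream
            output.append(translator.translate(buffer))
        return output

    print(simultaneous_translate(
        "Please put the file, the blue one, on the desk.".split(),
        ReadinessModel(), Translator()))

The interesting design question is the threshold: wait too long and you lose the "simultaneous" part; commit too early and you translate ambiguous words (like "file" below) before the disambiguating context arrives.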
My mom used to do English/French translation. Her favorite example was the word "file". That word has multiple translations in French depending on the context (e.g. "fichier" for a computer file, "dossier" for a case file, "lime" for the tool, "file" for a queue), and that context may simply be implied by who is speaking. You may not be able to figure it out from the conversation alone.
Right, but techniques like chain-of-thought reasoning can build concepts on concepts. Even if "the thing that generated the text" isn't creating new concepts, the text itself can be, because the AI has learned general patterns like reasoning and building upon previous conclusions.
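For example (a made-up prompt, just to illustrate the "text builds on text" point):

    prompt = (
        "Q: Alice is taller than Bob. Bob is taller than Carol. Who is shortest?\n"
        "Think step by step: state each intermediate conclusion, then build the "
        "next step on it before giving a final answer."
    )
    # A completion might read:
    #   "Alice > Bob. Bob > Carol. So Alice > Carol. Carol is the shortest."
    # Each line is a new conclusion derived from earlier *generated* text,
    # even though the model itself only ever predicts the next token.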