ItsMattyG's comments

Francois' (the creator of the ARC-AGI benchmark) whole point was that while they look the same, they're not. Coding is solving a familiar pattern in the same way (and fails when it's NOT doing that; it just looks like that doesn't happen because the model has seen SO MANY patterns in code). But the point of ARC-AGI is to make each problem require generalizing in some new way.

Because it empirically doesn't lead to hires as good as asking about past experiences does.


ChatGPT does this. You just click an arrow and it will show you other branches.


I have ChatGPT 4, and I have no idea what arrow you're talking about. Could you be more specific? I see no arrow on any of my previous or current messages.


By George, ItsMattyG is right! After editing a question (with the "stylus"/pen icon), the revision number counter that appears (e.g. "1 / 2") has arrows next to it that allow forward and backward navigation through the new branches.

This was surprisingly undiscoverable. I wonder if it's documented. I couldn't find anything from a quick look at help.openai.com .


Careful what you trust on help.openai.com. You used to be able to share conversations; now shared links are behind a login wall, and the docs don't reflect this. (If someone can recommend a frontend with this functionality, for quickly sharing conversations with others via a link, I'm taking recommendations. Thank you in advance.)


Interesting, for me this is only true as long as the AI generated content is worse.

There are some things where I really care about the human experience, but for much content it just matters to me whether it's a joy to read.


Yeah, right now most human-written content isn't that good either. Quality writing has largely been abandoned in favor of verbose, formal prose that offends no one and often lacks substance or any human touch. I'm guessing AI content will be about as flavorless. But time will tell.


Yeah. It is difficult to see AI supplanting humans for the things you go outside for, but any human involvement on the internet has always just been an implementation detail.


Eventually models will likely get their creativity by:

1. Interacting with the randomness of the world

and

2. Thinking a lot, going in loops of thought and seeing what they discover.

I don't expect them to need humans forever.


Agreed; by some definitions, specifically associating unrelated things, models are already creative.

Hallucinations are highly creative as well. But unless the technology changes, large language models will need human-made training data as a substrate to operate for a long time.


This is literally what the whole article was about. Not only does the quote itself contain that context "and the lego group", but the very next paragraph is "And then… nothing. The Tintin votes dried up, and Lego rejected both his fan-favorite Avatar and Polar Express ideas. The company never says why it rejects an Ideas submission, only that deciding factors include everything from “playability” and “brand fit” to the difficulties in licensing another company’s IP."


Oh hmm, with the penguin/giraffe one, when I first saw it I thought "that looks like an upside-down penguin; where's the giraffe?" Whereas with the others I immediately saw what they were trying to be.


I probably saw it as a right side up penguin first.

(I just went and looked on my phone and they are all more effective when they are tiny squares instead of blown up on a big screen.)


My understanding is that they trained a separate model to specifically estimate when they have enough context to begin translating, as a skilled translator would.
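(For what it's worth, that decision policy can be sketched roughly like the toy code below. Everything here is a made-up illustration: the confidence estimator is a stub based on token count, whereas the real system would use a separately trained model, and `translate_chunk` stands in for whatever translation model is behind it.)

```python
# Toy sketch of a "wait until enough context" policy for simultaneous
# translation. NOT the actual system: enough_context is a stub that a
# real pipeline would replace with a trained confidence model.

def enough_context(source_tokens, threshold=0.8):
    # Stub estimator: pretend confidence grows with the number of
    # source tokens seen (a real model would learn this from data).
    confidence = min(1.0, len(source_tokens) / 5)
    return confidence >= threshold

def simultaneous_translate(stream, translate_chunk):
    # Buffer incoming source tokens; emit a translation only once the
    # estimator says there is enough context to commit to one.
    buffer, output = [], []
    for token in stream:
        buffer.append(token)
        if enough_context(buffer):
            output.append(translate_chunk(buffer))
            buffer = []
    if buffer:  # flush any trailing tokens at end of stream
        output.append(translate_chunk(buffer))
    return output
```

The point of the separate estimator is the trade-off: translating too early risks committing to a wrong reading, while waiting too long defeats the purpose of live translation.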


My mom used to do English/French translation. Her favorite example was the word "file". That word has multiple translations in French depending on the context, and that context may simply be implied by who is speaking. You may not be able to figure it out based on the conversation alone.


Right, but techniques like chain of thought reasoning can build concepts on concepts. Even if "the thing that generated the text" isn't creating new concepts, the text itself can be, because the AI has learned general patterns like reasoning and building upon previous conclusions.


I also like to surprise my friends, family, and aardvarks with the occasional lamp post.


I had to unsubscribe from lamp. Too many aardvarks posting family friends.


Lämp!

