fchollet's comments | Hacker News

One interesting observation is that French-derived words in English tend to be fancier -- formal, sophisticated, higher-class -- while Germanic ones tend to be more casual, everyday vocabulary.


Many of these words entered English during the Norman Conquest, when England was ruled by French speakers: the upper class and nobility were Norman French.

When someone in the upper class wanted boeuf, they wanted the meat of a cow - not the cow itself. And so beef entered the English language as the word for the meat. This extended to other animals: in general, the English word for the meat comes from the French word for the animal, while the word for the animal itself derives from a Germanic root.

https://www.etymonline.com/word/beef and https://www.etymonline.com/word/cow

This also extended to the language of law and other matters that the upper classes (rather than the commoners) dealt with. When the common English (Germanic) speakers did have to deal with those topics, they used the French words, and those words were brought into English.


I believe this is because the Normans were wealthier than the native Brits.


My rough estimate is that words of two syllables or fewer are mostly Germanic, while words of three syllables or more are mostly Romance-derived.


Ça je ne crois pas ("that, I don't believe")


Um, I meant words in English. Sorry.


Only the peasants spoke Old English; the nobility spoke French. Eventually the two languages merged into modern English.


You can easily convert these tasks to token strings. The reason why ARC does not use language as part of its format is that it seeks to minimize the amount of prior knowledge needed to approach the tasks, so as to focus on fluid intelligence as opposed to acquired knowledge.

All ARC tasks are built entirely on top of "Core Knowledge" priors, the kind of elementary knowledge that a small child has already mastered and that is possessed universally by all humans.
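To make "convert these tasks to token strings" concrete, here is a minimal sketch of one possible serialization, assuming the public ARC JSON layout (grids as lists of lists of ints 0-9); the <in>/<out>/<eol> markers and function names are arbitrary illustrative choices, not an official encoding:

    # Minimal sketch: serialize an ARC-style task into a token string.
    # Assumes the public ARC JSON layout ("train"/"test" pairs, grids as
    # lists of lists of ints 0-9); the <in>/<out>/<eol> markers are made up.

    def grid_to_tokens(grid):
        # One token per cell, with an end-of-line marker after each row.
        return " ".join(" ".join(str(c) for c in row) + " <eol>" for row in grid)

    def task_to_tokens(task):
        parts = []
        for pair in task["train"]:
            parts.append("<in> " + grid_to_tokens(pair["input"]))
            parts.append("<out> " + grid_to_tokens(pair["output"]))
        parts.append("<in> " + grid_to_tokens(task["test"][0]["input"]))
        parts.append("<out>")  # the solver continues the string from here
        return "\n".join(parts)

    example = {
        "train": [{"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]}],
        "test": [{"input": [[1, 1], [0, 0]]}],
    }
    print(task_to_tokens(example))

Nothing about the task changes under such an encoding; only the presentation does.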


Can you explain? Would the token strings be just as easy for humans to solve?

Or let me ask differently. Can we still design text questions that are easy for humans and tough for AI?


The reason these tasks require fluid intelligence is because they were designed this way -- with task uniqueness/novelty as the primary goal.

ARC 1 was released long before in-context learning was identified in LLMs (and designed before Transformer-based LLMs existed), so the fact that LLMs can't do ARC was never a design consideration. It just turned out this way, which confirmed our initial assumption.


Is there any other confirmation of the assumptions, other than LLM behaviour? Because that still feels like circular reasoning.

I think a similar claim could be levelled against other benchmarks or LLM evaluation tasks. One could say that the Turing test was designed to assess human intelligence, and LLMs pass it, therefore LLMs have human intelligence. This is generally considered false now, because we can plainly see that LLMs do not have intelligence in the same way humans do (yet? debatable, not the point), and instead we concluded that the Turing test was not the right benchmark. That's not to diminish its importance; it was hugely important as a part of AI education and possibly even AI development for decades.

ARC does seem to be pushing the boundaries, I'm just not convinced that it's testing a provable step change.


I'm not sure that's quite correct about the Turing test. From Wikipedia:

"Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward."


>> The reason these tasks require fluid intelligence is because they were designed this way -- with task uniqueness/novelty as the primary goal.

That's in no way different from claiming that LLMs understand language, or reason, etc., because they were designed that way.

Neural nets of all sorts have been beating benchmarks since forever -- e.g. there are a ton of language understanding benchmarks, pretty much all saturated by now (GLUE, SuperGLUE, ULTRASUPERAWESOMEGLUE... OK, I made that last one up) -- but passing them means nothing about the ability of neural net-based systems to understand language, regardless of how much their authors designed them to test language understanding.

Failing a benchmark also doesn't mean anything. A few years ago, at the first ARC Kaggle competition, the entries were ad hoc and amateurish. The first time a well-resourced team tried ARC (OpenAI), they ran roughshod over it, and now you have to make a new one.

At some point you have to face the music: ARC is just another benchmark, destined to be beaten in good time whenever anyone makes a concentrated effort at it, while still proving nothing about intelligence, natural or artificial.


I mostly agree with what you're saying but…

> passing them means nothing about the ability of neural net-based systems to understand language, regardless of how much their authors designed them to test language understanding.

Does this implicitly suggest that it is impossible to quantitatively assess a system’s ability to understand language? (Using the term “system” in the broadest possible sense)

Not agreeing or disagreeing or asking with skepticism. Genuinely asking what your position is here, since it seems like your comment eventually leads to the conclusion that it is unknowable whether a system external to yourself understands language -- or, if it is knowable, then only in a purely qualitative way, or perhaps via a Potter Stewart-style "I know it when I see it" threshold test.

I don't have any problem if that's your position -- it might even be mine! I'm more or less of the mindset that debating whether artificial systems deserve labels like "understanding," "cognition," "sentience," etc. is generally unhelpful, and that it's much more interesting to talk about, on the one hand, the actual practical capabilities and functionalities of such systems in a concrete, observable, hopefully quantitative sense, and on the other hand, how it feels to interact with them in a purely qualitative sense. Benchmarks can be useful for the former but not the latter.

Just curious where you fall. How would you recommend we approach the desire to understand whether such systems can "understand language" or "solve problems," etc.? Or are these questions useless in your view? Or only useful insofar as they (the benchmarks/tests, etc.) drive the development of new methodologies/innovations/measurable capabilities, but not in assigning qualitative properties to said systems?


>> Does this implicitly suggest that it is impossible to quantitatively assess a system’s ability to understand language? (Using the term “system” in the broadest possible sense)

I don't know and I don't have an opinion. I know that tests that claimed to measure language understanding, historically, haven't. There's some literature on the subject if you're curious (sounds like you are). I'd start here:

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data

Emily M. Bender, Alexander Koller

https://aclanthology.org/2020.acl-main.463/

Quoting the passage that I tend to remember:

>> While large neural LMs may well end up being important components of an eventual full-scale solution to human-analogous NLU, they are not nearly-there solutions to this grand challenge. We argue in this paper that genuine progress in our field — climbing the right hill, not just the hill on whose slope we currently sit — depends on maintaining clarity around big picture notions such as meaning and understanding in task design and reporting of experimental results.


The first time a top lab spent millions trying to beat ARC was actually in 2021, and the effort failed.

By the time OpenAI attempted ARC in 2024, a colossal amount of resources had already been expended trying to beat the benchmark. The OpenAI run itself cost several million dollars in inference compute alone.

ARC was the only benchmark that highlighted o3 as having qualitatively different abilities compared to all models that came before. o3 is a case of a good approach meeting an appropriate benchmark, rather than an effort to beat ARC specifically.


>> The first time a top lab spent millions trying to beat ARC was actually in 2021, and the effort failed.

Which top lab was that? What did they try?

>> ARC was the only benchmark that highlighted o3 as having qualitatively different abilities compared to all models that came before.

Unfortunately, observations support a simpler hypothesis: o3 was trained on sufficient data about ARC 1 that it could solve it well. There is currently insufficient data on ARC 2 to do the same, therefore o3 can't solve it. No super magical and mysterious qualitatively different abilities compared to all models that came before are required whatsoever.

Indeed, that is a common pattern in machine learning research: newer models perform better on benchmarks than earlier models not because their capabilities increase with respect to earlier models, but because they're bigger models, trained on more data and more compute. They're just bigger, slower, more expensive -- and just as dumb as their predecessors.

That's 90% of deep learning research in a nutshell.


I'm sorry, but what observations support that hypothesis? There were scores of teams trying exactly that -- training LLMs directly on ARC-AGI data -- and by and large they achieved mediocre results. It just isn't an approach that works for this problem set.

To be honest your argument sounds like an attempt to motivate a predetermined conclusion.


In which case what is the point of your comment? I mean what do you expect me to do after reading it, reach a different predetermined conclusion?


Provide some evidence for your claims? This empty rhetoric stuff in every AI thread on HN wears me out a bit. I apologise for being a little aggressive in my previous comment.


There have been some human studies on ARC 1 previously, I expect there will be more in the future. See this paper from 2021, which was one of the earliest works in this direction: https://arxiv.org/abs/2103.05823


ARC 3 is still spatially 2D, but it adds a time dimension, and it's interactive.


I think a lot of people got discouraged, seeing how OpenAI solved ARC-AGI-1 by what seems like brute-forcing and throwing money at it. Do you believe ARC was solved in the "spirit" of the challenge? Also, all the open-sourced solutions seem super specific to solving ARC. Is this really leading us to human-level AI at open-ended tasks?


Strong emphasis on "seems".

I'd encourage you to review the definition of "brute force", and then consider the absolutely immense combinatoric space represented by the grids these puzzles use.

"Brute force" simply cannot touch these puzzles. An amount of understanding and pattern recognition is strictly required, even with the large quantities of test-time compute that were used against arc-agi-1.


Also, there's no clear way to verify the solution. There could easily be multiple rules that work on the same examples.


It's useful to know what current AI systems can achieve with unlimited test-time compute resources. Ultimately though, the "spirit of the challenge" is efficiency, which is why we're specifically looking for solutions that are at least within 1-2 orders of magnitude of cost of being competitive with humans. The Kaggle leaderboard is very resource-constrained, and on the public leaderboard you need to use less than $10,000 in compute to solve 120 tasks.


Efficiency sounds like a hardware problem as much as a software problem.

$10,000 in compute is a moving target; today's GPUs are much, much better than those of 10 years ago.


> $10,000 in compute is a moving target

And it's also irrelevant in some fields. If you solve a "protein folding" problem that was a blocker for a pharma company, that $10k is peanuts.

Same for coding. If you can spend $100/hr on a "mid-level" SWE agent, but you can literally spawn 100 of them today and 0 tomorrow and reach your clients faster, again the cost is irrelevant.


Are you in the process of creating tasks that behave as an acid test for AGI? If not, do you think such a task is feasible? I read somewhere on the ARC blog that they define AGI as the point when creating tasks that are hard for AI but easy for humans becomes virtually impossible.


If you aren't joking, that will filter out most humans.


They said at least two people out of 400 solved each problem, so they're pretty hard.


I don't think that's correct. They had 400 people receive some questions, and only kept the questions that were solved by at least 2 people. The 400 people didn't all receive 120 questions (they'd have probably got bored).

If you go through the example problems you'll notice that most are testing the "aha" moment. Once you do a couple, you know what to expect, but with larger grids you have to stay focused and keep track of a few things to get it right.


> Who would be buying bitcoin right now?

Well, maybe the US government? What if the US starts dedicating 10-15% of yearly federal receipts to serve as exit liquidity for Bitcoin holders?


What all top models do is recombine at test time the knowledge they already have. So they all possess Core Knowledge priors. Techniques to acquire them vary:

* Use a pretrained LLM and hope that relevant programs will be memorized via exposure to text data (this doesn't work that well)

* Pretrain a LLM on ARC-AGI-like data

* Hardcode the priors into a DSL

> Which is to say, a data augmentation approach

The key bit isn't the data augmentation but the TTT (test-time training). TTT is a way to lift the #1 issue with DL models: that they cannot recombine their knowledge at test time to adapt to something they haven't seen before (strong generalization). You can argue whether TTT is the right way to achieve this, but there is no doubt that it is a major advance in this direction.

The top ARC-AGI models perform well not because they're trained on tons of data, but because they can adapt to novelty at test time (usually via TTT). For instance, if you drop the TTT component you will see that these large models trained on millions of synthetic ARC-AGI tasks drop to <10% accuracy. This demonstrates empirically that ARC-AGI cannot be solved purely via memorization and interpolation.
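For readers unfamiliar with the technique, here is a minimal sketch of what TTT looks like in practice, assuming a generic PyTorch model that maps an input grid tensor to an output grid tensor; the augmentations, loss, and hyperparameters are illustrative placeholders, not any particular team's recipe:

    # Test-time training (TTT) sketch: adapt a copy of the model to a single
    # task by fine-tuning on that task's own demonstration pairs (plus simple
    # augmentations), then use the adapted copy to predict the test output.
    import copy
    import torch

    def augment(pair):
        # Illustrative augmentation: the four 90-degree rotations of the pair,
        # applied consistently to input and output.
        x, y = pair
        return [(torch.rot90(x, k, dims=(-2, -1)), torch.rot90(y, k, dims=(-2, -1)))
                for k in range(4)]

    def test_time_train(base_model, demo_pairs, steps=100, lr=1e-4):
        model = copy.deepcopy(base_model)  # never mutate the shared base model
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        data = [aug for pair in demo_pairs for aug in augment(pair)]
        model.train()
        for step in range(steps):
            x, y = data[step % len(data)]
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)  # placeholder loss
            loss.backward()
            opt.step()
        return model  # this adapted copy is used only for the current task

The point of the sketch is the structure rather than the details: the weights that produce the final answer did not exist before the model saw the task.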


>> So they all possess Core Knowledge priors.

Do you mean the ones from your white paper? The same ones that humans possess? How do you know this?

>> The key bit isn't the data augmentation but the TTT.

I haven't had the chance to read the papers carefully. Have they done ablation studies? For instance, is the following a guess or is it an empirical result?

>> For instance, if you drop the TTT component you will see that these large models trained on millions of synthetic ARC-AGI tasks drop to <10% accuracy.


>This demonstrates empirically that ARC-AGI cannot be solved purely via memorization and interpolation

Now that the current challenge is over, and a successor dataset is in the works, can we see how well the leading LLMs perform against the private test set?


I think the "semi-private" numbers here already measure that: https://arcprize.org/2024-results

For example, Claude 3.5 gets 14% in semi-private eval vs 21% in public eval. I remember reading an explanation of "semi-private" earlier but cannot find it now.


It is correct that the first model that will beat ARC-AGI will only be able to handle ARC-AGI tasks. However, the idea is that the architecture of that model should be able to be repurposed to arbitrary problems. That is what makes ARC-AGI a good compass towards AGI (unlike chess).

For instance, current top models use TTT, which is a completely general-purpose technique that provides the most significant boost to DL models' generalization power in recent memory.

The other category of approach that is working well is program synthesis -- if pushed to the extent that it could solve ARC-AGI, the same system could be redeployed to solve arbitrary programming tasks, as well as tasks isomorphic to programming (such as theorem proving).
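A toy sketch of the program-synthesis direction, for concreteness: enumerate compositions of primitives from a small DSL and keep whatever program is consistent with the demonstration pairs. The primitives and blind enumeration below are deliberately simplistic; real ARC solvers use far richer DSLs and smarter, often LLM-guided, search.

    # Toy program synthesis over a tiny grid DSL.
    from itertools import product

    def flip_h(g): return [row[::-1] for row in g]
    def flip_v(g): return g[::-1]
    def transpose(g): return [list(r) for r in zip(*g)]

    PRIMITIVES = [flip_h, flip_v, transpose]

    def run(program, grid):
        for f in program:
            grid = f(grid)
        return grid

    def synthesize(demos, max_depth=3):
        # Return the first primitive sequence mapping every input to its output.
        for depth in range(1, max_depth + 1):
            for program in product(PRIMITIVES, repeat=depth):
                if all(run(program, x) == y for x, y in demos):
                    return program
        return None

    demos = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
    print([f.__name__ for f in synthesize(demos)])  # ['flip_h'] for this toy example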


"However, the idea is that the architecture of that model should be able to be repurposed to arbitrary problems"

From a mathematical perspective, this doesn't sound right. All NNs are universal approximators, and in theory they can all learn the same thing to equal ability. It's more about the learning algorithm than the architecture IMO.


François, have you coded and tested a solution yourself that you think will work best?


Hey, he's the visionary. You come up with the nuts and bolts.


Is Keras nuts-and-bolts enough?


Keras is a good abstraction model but poorly implemented.


Yes to both.


Actually, `keras.distribution` is straightforward to implement on top of TF DTensor and the experimental PyTorch SPMD API. We haven't done it yet, first because these APIs are experimental (only JAX is mature), and second because all the demand for large-model distribution at Google was for the JAX backend.
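For reference, this is roughly what the JAX-backed path looks like today -- a minimal data-parallel sketch based on the public Keras 3 `keras.distribution` API (exact details may vary across versions):

    import os
    os.environ["KERAS_BACKEND"] = "jax"  # only the JAX backend is mature for this today
    import keras

    devices = keras.distribution.list_devices()          # e.g. all local GPUs/TPUs
    data_parallel = keras.distribution.DataParallel(devices=devices)
    keras.distribution.set_distribution(data_parallel)

    # Models built after this point have their variables replicated and their
    # batches sharded across the listed devices; the training code is unchanged.
    model = keras.Sequential([keras.layers.Dense(10)])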

