
Here are some benchmarks [0][1]. It's excellent to see that an open model is approaching (and in some areas surpassing) GPT-3.5!

AI2 Reasoning Challenge (25-shot) - a set of grade-school science questions.

- Llama 1 (llama-65b): 57.6

- Llama 2 (llama-2-70b-chat-hf): 64.6

- GPT-3.5: 85.2

- GPT-4: 96.3

HellaSwag (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.

- Llama 1: 84.3

- Llama 2: 85.9

- GPT-3.5: 85.3

- GPT-4: 95.3

MMLU (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.

- Llama 1: 63.4

- Llama 2: 63.9

- GPT-3.5: 70.0

- GPT-4: 86.4

TruthfulQA (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online. Note: TruthfulQA in the Harness is actually a minimum 6-shot task, as 6 examples are systematically prepended, even when it is launched with 0 as the number of few-shot examples.

- Llama 1: 43.0

- Llama 2: 52.8

- GPT-3.5: 47.0

- GPT-4: 59.0

[0] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderb... [1] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderb...




Is it possible that some LLMs are trained on these benchmarks, which would mean they’re overfitting and are incorrectly ranked? Or am I misunderstanding these benchmarks?…



Having worked on ML products, I can say there is sometimes debate on whether you should train on the test partition prior to prod deployment - after all, why would you ship a worse model to prod? Obviously you can't tell whether the model is better at generalization compared to an alternate technique, and you also incur some overfit risk. But many industrial problems are solvable through memorization.


> after all, why would you ship a worse model to prod?

...because you need a control to evaluate how well your product is doing? I know it's a young field, but boy, do some folk love removing the "science" from "data science"


You can evaluate a version of the model that has been trained on one set of data, and ship to production a different model that has been trained on the complete set of data. In many cases one can reasonably infer that the model which has seen all of the data will be better than the model which has seen only some of the data.

I'm not claiming that's what happened here, nor am I interested in nitpicking "what counts as 'science'". I'm just saying this is a reasonable thing to do.


This is possible if you, for example, train 1000 models on different subsets of data and verify that each and every one of them performs well. In that case, you can reasonably infer that another model trained on all the data would work well, too.

But this is, of course, 1000 times more expensive to do. And if you only train 100, or 10, or 1 model, then the deduction becomes increasingly unstable.

So from a practical point of view, it's probably not feasible, because you would put those resources into something else instead that has more ROI.
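
For a rough sense of what that deduction looks like in practice, here is a made-up sketch using scikit-learn k-fold cross-validation on toy data (none of this is from the thread, just an illustration):

    # Illustrative sketch only: estimate how a model trained on ALL the data will do
    # by training several models on different subsets (k-fold cross-validation).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

    # Each fold trains on a different 80% subset and scores on the held-out 20%.
    scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5)
    print(scores.mean(), scores.std())

    # If every fold looks good, we infer a model trained on the full dataset
    # should do at least as well, even though that final model is never tested.
    final_model = GradientBoostingClassifier().fit(X, y)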


I have personally never seen a situation where more training data (of similar quality) causes the model to perform worse. Have you seen such a situation? Please provide an example.

Your suggestion of running 1000 training runs with different subsets of data sounds excessive and unnecessary to me.


You have to know when to stop training. How are you going to do that without a test set? How do you know when you have achieved generalization without over-fitting?


Early stopping is just one form of regularization. You can use L2 or dropout instead, and then train until your model converges.
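
For example (an illustrative PyTorch sketch with toy data and hyperparameters I'm making up, not anyone's real setup): dropout plus L2 weight decay, trained for a fixed budget with no early stopping.

    # Illustrative sketch: regularize with dropout + L2 (weight decay) and just
    # train to convergence, instead of early-stopping against a validation set.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    X = torch.randn(1000, 128)                 # toy data so the sketch runs
    y = torch.randint(0, 10, (1000,))
    loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Dropout(p=0.5),                     # dropout regularization
        nn.Linear(256, 10),
    )
    # weight_decay applies an L2 penalty to the weights.
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(100):                   # fixed budget, no early stopping
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()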


Usually I develop models with a train/validation/test split, where I'm measuring results on the validation set to decide the appropriate number of epochs to use. Then I burn the test set to evaluate performance. Then I train from scratch on the entire dataset (no split) and I use the same number of epochs to train here. Is this number of epochs optimal when the dataset is different? Of course not. But when you use regularization and other methods to combat overfitting appropriately, your training is not going to be overly sensitive to changes in epoch number anyway.
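
In (purely illustrative) code, with toy data and a scikit-learn classifier standing in for whatever model is actually being trained, that workflow looks roughly like this:

    # Illustrative sketch of the workflow above; the dataset, model and epoch
    # counts are placeholders, not my actual setup.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=6000, n_features=30, random_state=0)

    # 1) train / validation / test split
    X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_trval, y_trval, test_size=0.25, random_state=0)

    # 2) pick the number of epochs (max_iter here) on the validation set
    best_epochs, best_score = None, -1.0
    for epochs in (10, 20, 40, 80):
        clf = MLPClassifier(max_iter=epochs, random_state=0).fit(X_train, y_train)
        score = clf.score(X_val, y_val)
        if score > best_score:
            best_epochs, best_score = epochs, score

    # 3) "burn" the test set once to report performance
    test_score = MLPClassifier(max_iter=best_epochs, random_state=0).fit(X_train, y_train).score(X_test, y_test)

    # 4) retrain from scratch on ALL the data with the same epoch count; ship this one
    final_model = MLPClassifier(max_iter=best_epochs, random_state=0).fit(X, y)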


In the case of fine-tuning, you can end up with catastrophic forgetting. Architecture can influence how data scales, and adding data doesn’t always improve performance.


>infer that the model which has seen all of the data will be better than the model which has seen only some of the data.

It really depends upon the data. A smaller set of data that mostly consists of "truth" might be better than a larger dataset that also has many "lies".

Perhaps what you mean is that the model might be more representative, rather than _better_.


There are offline metrics and online metrics. Offline metrics might be something like AUROC on a test set. Once you’ve pushed the model online, you can check the online metrics. Ultimately the online metrics are more important, that’s the whole reason the model exists in the first place.

Your control in an online environment is the current baseline. You don’t need to save the test set anymore, you can push it online and test it directly.
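
For concreteness, the offline half of that is a toy illustration like the following (not any particular product's pipeline):

    # Illustrative sketch: an offline metric (AUROC) computed on a held-out test set,
    # before the model ever touches online traffic.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("offline AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))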


Why would you want to ship an untested model? That's insane.


This is a common approach, for example, in data science competitions. Why? Well, if you want to maximize the model's abilities, this is what you have to do. (Not saying Llama 2 is released like this; it probably isn't)


Yeah but in competitions there's a secret test set used to evaluate the model.


I have personally shipped "untested" models in production in situations where a "secret test set" does not exist. (Train on subset of data -> evaluate on different subset of data -> train again on entire dataset).

I do not consider myself to be insane.


I didn't mean to insult anyone. The idea of not knowing the actual performance of the model just intuitively seems to me like it's a bit of a gamble. I have only trained models in a scientific context before, where this was never an option.


Here's another way to look at it. The test set is an approximation for how the model will perform against production data, but the actual performance of the model is how it performs for actual end-users. So real _actual_ results are always unknown until after the fact. Given that, if the metrics from training clearly show that more data == better model, and there's no reason to expect that trend to reverse, then the logical thing to do is maximise the data used for training to get the best results for actual production data.

Doing this does complicate decisions for releasing subsequent model updates, as the production model can't be directly compared against new iterations any more. Instead a pre-production model that has not seen the test set would need to be used. However, if data drift is likely, then re-using the old test set wouldn't be useful anyway.


Another way of thinking about it: if training on all the data yields a model which is functionally 5% better in online metrics (which would not be uncommon with a Pareto-distributed traffic pattern), then any subsequent partitioned model would likely perform worse than the prod model.

More complication arises when users expect that things which previously worked one way continue working that way. Users don't really care that their traffic was in the test set. In an even more extreme case, many industrial problems have a high correlation between the traffic today and the traffic next week. An optimal solution for such a situation would be to fully memorize today's traffic and use that for next week. In many cases, an overfit model can effectively perform this memorization task with fewer parameters/less infrastructure than an actual dictionary lookup.


You act like training is this pre-set process you just "do". That's not the case, you train until you reach desired performance on the test set. If you don't have a test set how do you know when to stop training and avoid overfitting?


You're confusing training epochs with dataset size.

I'm simplifying now, but you can think of epochs as "how many times do we train over the entire dataset? Once? 10 times?"

Correspondingly, you can think of dataset size as "how many Wikipedia pages do we include in the dataset? 1 million? 10 million?"

Now let's think about overfitting.

What happens when you increase epochs is the model is more likely to overfit your data.

What happens when you increase dataset size is the model is less likely to overfit your data.



Unfortunately, Goodhart's law applies to most kinds of tests

> Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.


This is SAT-prep in a nutshell. :)


Test leakage is not impossible for some benchmarks. But researchers try to avoid/mitigate that as much as possible for obvious reasons.


Given all of the times OpenAI has trained on peoples' examples of "bad" prompts, I am sure they are fine-tuning on these benchmarks. It's the natural thing to do if you are trying to position yourself as the "most accurate" AI.


Assuming they were doing that, fine-tuning on benchmarks isn't the same as test leakage/testing on training data. No researcher is intentionally training on test data.

If it performs about as well in instances it has never seen before (test set) then it's not overfit to the test.


I'm confused, fine-tuning is training. How is that not leakage? I'm hesitant to call them researchers, they are employees of a for-profit company trying to meet investor expectations.


1. You train on the kind of problems you want to solve. You don't report numbers that evaluate performance on examples the model trained on. Datasets will typically have splits, one for training and another for testing.

2. OpenAI is capped-profit. They are also not a publicly traded company. Researchers are researchers regardless of who they work for. Training on test data is especially stupid for commercial applications because customers find that out quickly and any reputation is gone.


I am suggesting that OpenAI's main product is "LLM that benchmarks the best." From that point, it is completely illogical not to train on at least some of the test data (or data that is very similar to the test data) so that you can fudge the numbers in your favor. You don't want to go too far, but overfitting a tiny bit will make you look like you have a significant edge. When someone says that your product isn't that good, you then point to the benchmarks and say, "objective measures say that you are wrong." This is a tried and true marketing technique.

Hardware companies, which live and die on benchmarks, do this all the time. Meanwhile, it does appear that OpenAI is underperforming consumer expectations, and losing users quite quickly at this point, despite doing incredibly well on benchmarks.

Also, this isn't about profit. It's about market cap and it's about prestige. Those are not correlated to profit.


Yeah and I'm saying I don't believe it.

I don't know what you're talking about. GPT-4 is the best model out there by a significant margin. That's coming from personal usage, not benchmarks. A 10% drop in traffic the first month students are out of school is not "losing users quickly" lol.

ChatGPT didn't gain public use by waving benchmarks around. We didn't even know what the benchmarks were until GPT-4's release. The vast majority of its users neither know nor care about any of that. So your first sentence is just kind of nonsensical.

Anyway whatever. If that's what you believe then that's what you believe. Just realize you have nothing to back it up.


Nobody has any evidence here. I'm saying that the incentives are such that the null hypothesis should be the opposite of what you think.


Your entire argument, your "incentives", hinges on "OpenAI's main product is 'LLM that benchmarks the best'", which is a particularly silly assertion when OpenAI did not release benchmark evaluations for 3.5 for months. Not when the product was released. Not even when the API was released.


You don't have to release official numbers to run benchmarks. You also don't have to own the LLM to run benchmarks. Within hours of GPT-4's emergence, many benchmarks had been run.


You said their main product was "LLMs that benchmark the best" as if benchmarking were some important aspect of marketing. It's not. That's fact. You can't say it's this hugely important thing and conveniently leave out that they make near zero effort to do anything with it.

Basically the only people running benchmarks that could have been gamed on GPT-4 were other researchers, not companies, customers or users looking to use a product.

Normal users are certainly not running benchmarks and companies running benchmarks are running ones on internal data, which just defeats the whole point of gaming these research benchmarks.


Besides, OpenAI dropped all pretense of being open and transparent as soon as they saw how popular their open and transparent technology had become.


“No researcher is intentionally training on test data.”

Citation Needed.


[flagged]


I am suggesting that it is only logical for a company whose main advertising comes from good benchmark numbers to play games with the benchmarks. In this case, I am suggesting that they run a fine-tuning/RL pass using benchmark scores as an objective function or using a training set that otherwise looks a lot like the benchmarks. Every single other company whose marketing depends on benchmarks does the analogue of this to some degree.

And we won't know for sure that they aren't doing this until they publicly disclose details about their model and training process (like every other research org does), allowing other researchers to run replication studies.

Also, I don't appreciate the ad hominems. Comments about some unrelated "conspiracy theorist" and "vaccine discourse" add nothing to the discussion.



That’s why OpenAI didn’t release any details on the GPT-4 training data blend ;)


It would be a bit of a scandal, and IMO too much hassle to sneak in. These models are trained on massive amounts of text - specifically anticipating which metrics people will care about and generating synthetic data just for them seems extra.

But I'm not an expert, nor the OP!


I don't think it's a scandal, it's a natural thing that happens when iterating on models. OP doesn't mean they literally train on those tests, but that as a meta-consequence of using those tests as benchmarks, you will adjust the model and hyperparameters in ways that perform better on those tests.

For a particular model you try to minimize this by separating out test and validation sets, but on a meta-meta level, it's easy to see it happening.


You don't see an engineer at an extremely PR-conscious company at least checking how their model performs on popular benchmarks before rolling it out? And if its performance is lackluster, do you really see them doing nothing about it? It probably doesn't make a huge difference anyway. I know those old vision models were overfitted to the standard image library benchmarks, but they were still very impressive.


Famously, some of the image models were so overtrained they could still yield impressive results if the colors were removed.


This wasn't so much overtraining, as the models learning something different than what we expected. If you look at a pixel by pixel representation of an image, textures tend to be more significant/unique patterns than shapes. There are some funny studies from the mid 2010s exploring this.


How would it even be possible to verify that?


"Verify", that's quite a demand;

"corroborate", you find queries of the same level which would give satisfactory output upon good performance but fail in a faulty overfitted model.


Good to see these results, thanks for posting. I wonder if GPT-4's dominance is due to some secret sauce or if it's just first-mover advantage and Llama will be there soon.


In ChatGPT there is plenty of "secret sauce" in the output sampling, such as sending the output for scoring by another model.

As for GPT-4, allegedly it is a combined model (many domain-specific models), so perhaps there is extra input processing by yet another model to detect the problem domain and send it to the right specialised model.


It's just scale. But scale that comes with more than an order of magnitude more expense than the Llama models. I don't see anyone training such a model and releasing it for free anytime soon.


I thought it was revealed to be fundamentally ensemblamatic in a way the others weren’t? Using “experts” I think? Seems like it would meet the bar for “secret sauce” to me


Sparse MoE models are neither new nor secret. The only reason you haven't seen much use of them for LLMs is because they would typically underperform their dense counterparts significantly.

Until this paper (https://arxiv.org/abs/2305.14705) indicated they apparently benefit far more from Instruct tuning than dense models, it was mostly a "good on paper" kind of thing.

In the paper, you can see the underperformance I'm talking about.

Flan-MoE-32B (259B total parameters) scores 25.5% on MMLU before instruct tuning and 65.4% after.

Flan 62B scores 55% before instruct tuning and 59% after.


This paper came out well after GPT-4, so apparently this was indeed a secret before then.


The user I was replying to was talking about the now and future.

We also have no indication sparse models outperform dense counterparts so it's scale either way.


Is there a difference here between a secret and an unknown? It may well be that some researcher / comp engineer had an idea, tried it out, realized it was incredibly powerful, implemented it for real this time and then published findings after they were sure of it?

I'm more of a mechanical engineering adjacent professional than a programmer and only follow AI developments loosely


The quoted paper, yes, but the MoE concept, layers, and training are old.

Published as a conference paper at ICLR 2017

OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton and Jeff Dean
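
The core mechanism from that paper fits in a short sketch (a toy top-k-gated MoE layer in PyTorch; the dimensions and expert count are made up, and the paper's noisy gating and load-balancing losses are left out):

    # Toy sketch of a sparsely-gated mixture-of-experts layer (top-k routing).
    # Not the paper's implementation: no noisy gating or load-balancing loss here.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseMoE(nn.Module):
        def __init__(self, dim=512, num_experts=8, k=2):
            super().__init__()
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            )
            self.gate = nn.Linear(dim, num_experts)   # the router
            self.k = k

        def forward(self, x):                          # x: (tokens, dim)
            weights, idx = self.gate(x).topk(self.k, dim=-1)  # top-k experts per token
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                tok, slot = (idx == i).nonzero(as_tuple=True)  # tokens routed to expert i
                if tok.numel():
                    out[tok] += weights[tok, slot].unsqueeze(-1) * expert(x[tok])
            return out                                 # only k of num_experts ran per token

    moe = SparseMoE()
    print(moe(torch.randn(16, 512)).shape)             # torch.Size([16, 512])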


GPT4 is rumored to have 1.7T parameters, Llama 2 70B.


230x8 MoE.


I have to say, in my experience falcon-40b-instruct got very close to ChatGPT (gpt-3.5), even surpassing it in a few domains. However, it is important to note that (not at all open) OpenAI are doing tricks with the model output. So comparing OS models using just greedy output decoding (very simple) is not fair to OS models.

Still, I'm very excited that this model at 13B seems to be matching falcon-40B in some benchmarks. I'm looking forward to using it :-)


> OpenAI are doing tricks with the model output

Do you have any pointers to the “tricks” that are being applied?


Sounds like a reference to Mixture of Experts


Could be something like prompt rewriting, chain of thought, or Reflexion going on in the background as well.
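
For illustration only, a client-side chain-of-thought wrapper could look roughly like the sketch below (this is a guess at the general shape of such a trick, using the pre-1.0 openai Python client; it is not a description of OpenAI's actual serving pipeline):

    # Hypothetical illustration of a hidden chain-of-thought wrapper.
    # NOT OpenAI's actual pipeline; just what such a trick could look like client-side.
    import openai

    def answer_with_hidden_cot(question: str) -> str:
        # Step 1: ask the model to reason step by step (this text stays hidden).
        reasoning = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Think step by step and show your working:\n{question}"}],
        )["choices"][0]["message"]["content"]

        # Step 2: ask for a short final answer conditioned on that reasoning.
        final = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Question: {question}\nReasoning: {reasoning}\n"
                                  "Reply with only the final answer."}],
        )["choices"][0]["message"]["content"]
        return final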


When were the GPT-4 benchmarks calculated, on original release or more recently? (Curious given the debate about alleged GPT-4 nerfing.)


They're based on the original technical report.

"Refuel" has run a different set of benchmarks on GPT-3.5 and GPT-4 and found a decline in quality.

https://www.refuel.ai/blog-posts/gpt-3-5-turbo-model-compari...


Plenty of the complaints/accusations predate the release of the 0613 set of models.

To be clear, I have trouble with the theory as I have not yet seen evidence of "nerfing". What you provided is actually the _only_ evidence I've seen that suggests degradation - but in this case OpenAI is being completely transparent about it and allows you to switch to the 0314 model if you would like to.

Every complaint I have seen has been highly anecdotal, lacking any rigor, and I bet they are explained by prolonged usage leading to people noticing more errors. Also probably a bit of a "the magic is gone now" psychological effect (like how a "cutting edge" video game such as Half-Life 2 feels a bit lackluster these days).


Could it be the case that many of these models have simply memorized the benchmark material in their parameters?


How do they compare the exact value returned in a response? I've found that getting back a stable JSON format is unpredictable, or it replies in a different language.
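
For the JSON part at least, a common workaround (just an illustrative sketch, not what any particular benchmark harness does) is to extract and parse the first JSON object in the reply rather than comparing raw strings:

    # Illustrative sketch: tolerant extraction of a JSON object from a model reply,
    # since models don't always return clean JSON.
    import json
    import re

    def extract_json(reply: str):
        try:
            return json.loads(reply)                        # best case: the whole reply is JSON
        except json.JSONDecodeError:
            match = re.search(r"\{.*\}", reply, re.DOTALL)  # fall back to the first {...} block
            return json.loads(match.group(0)) if match else None

    print(extract_json('Sure! Here you go: {"answer": 42}'))  # {'answer': 42}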


Your Llama2 MMLU figure is wrong


Looks like he copied it from https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderb...

I see different figures in different places, no idea what's right.



