The report cites both GPT-3.5 and GPT-4 scores on page 7 [1]. I've checked the numbers, and they compare FreeWilly2 to GPT-3.5. For example, the HellaSwag score of 85.5% is GPT-3.5's.
There are no actual footnote marks connecting any statements in the post to the footnotes, so no specific claims are referenced. But if you read the actual text of the page, they say it compares favorably to GPT-3.5 for some tasks. Which means it falls short of GPT-3.5 on the rest, and of GPT-4 on all of them; otherwise they surely would have mentioned that as well.
It is. I think people are perhaps leaping to the usual discussions when models are called "open source", but here it seems they've called it free. Wasn't distinguishing "open" from "free" the whole foundation of the open source discussion?
Edit - someone will no doubt bring up "open access" as a term. This is a common term for academic work and the license here easily meets the criteria usually applied. Open access is not the same as open source.
I may be missing context here, but shouldn't it be possible to train an LLM-like model for images? (as an alternative to the stable diffusion process)
If you rearrange all the pixels of a square image using the Hilbert curve, you end up with the pixels arranged in 1D, and that shouldn't be much different from the "word tokens" LLMs are used to dealing with, right? Like an LLM that only "talks" in pixels.
This would have the benefit that you could use various resolutions during training with the model still "converging" (since the Hilbert curve's mapping stabilizes as the resolution grows towards infinity).
I'm not sure whether the color values would also need to be linearized; if so, maybe you could represent the RGB values as a 3D cube and apply a 3D Hilbert curve to it, which would give you a 1D representation of all the colors.
I don't really know the subject but I guess something like that should be possible.
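For what it's worth, the ordering is easy to sketch. Here's a toy version (assuming a power-of-two side length; `d2xy` is the standard iterative distance-to-coordinate conversion, and `hilbert_flatten` is just a hypothetical name for the flattening step):

```python
def d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid.

    n must be a power of two. Standard iterative conversion.
    """
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:  # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

def hilbert_flatten(img):
    """Return the pixels of a square image as a 1D sequence in Hilbert order."""
    n = len(img)  # side length, assumed to be a power of two
    return [img[y][x] for x, y in (d2xy(n, d) for d in range(n * n))]
```

Consecutive tokens in the sequence are then always neighboring pixels, which is the locality property a plain raster-order flatten lacks.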
No need for a Hilbert curve, you can just flatten pixels the usual way (i.e. X = img.reshape(-1)). The main issue is that attention scales quadratically with sequence length, and with a 512x512 image the attended region is now 262k tokens, which is a lot. The other issue is that you'd throw away information by linearizing colors (why not keep them 3-dimensional?).
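Concretely, in numpy (keeping each pixel's RGB together as one 3-channel "token" rather than linearizing the colors):

```python
import numpy as np

img = np.zeros((512, 512, 3), dtype=np.uint8)  # a 512x512 RGB image
tokens = img.reshape(-1, 3)                    # raster-order flatten, one token per pixel
print(tokens.shape)  # (262144, 3) -- 262k tokens to attend over
```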
The corresponding work you're looking for is Vision Transformers (ViT). They work well, though I think not as well for image generation as LLMs do for text. Also, I think people like that diffusion models are comparatively small in memory even if sampling is compute-expensive: they'd rather wait than OOM.
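The trick ViTs use to keep the sequence manageable is tokenizing patches instead of pixels. A rough sketch (assuming 16x16 patches, a common default; `patchify` is just an illustrative name):

```python
import numpy as np

def patchify(img, p=16):
    """Split an HxWxC image into non-overlapping pxp patches, flattened ViT-style."""
    h, w, c = img.shape
    # group rows/cols into blocks of p, then bring the two block indices to the front
    patches = img.reshape(h // p, p, w // p, p, c).swapaxes(1, 2)
    return patches.reshape(-1, p * p * c)

img = np.zeros((512, 512, 3))
print(patchify(img).shape)  # (1024, 768): 1024 tokens instead of 262k
```

Each patch becomes one token, so the same 512x512 image drops from 262k tokens to 1024, which attention can handle.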
> Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning.
I notice that the repo hasn’t been updated since April, and a question asking for an update has been ignored for at least a month: https://github.com/Stability-AI/StableLM/issues/83