The "Mistral Pixtral multimodal model" really rolls off the tongue.
> It’s unclear which image data Mistral might have used to develop Pixtral 12B.
The days of free web scraping, especially for the richer sources of material, are almost gone, with everything from technical measures (API restrictions) to legal ones (copyright) building deep moats. I also wonder what they trained it on. They're not Meta or Google, with endless supplies of user content or exclusive contracts with the Reddits of the internet.
What do you mean by copyright measures? Has anything changed on that front in the last two years?
My hunch is that most AI labs are already sitting on a pretty sizable collection of scraped image data - and that data from two years ago will be almost as effective as data scraped today, at least as far as image training goes.
The issue with image models is that their style becomes identifiable and stale quite quickly, so you'll need a fresh intake of different, newer styles every so often, and that's going to be harder and harder to get.
The style becoming identifiable and stale has mostly to do with CFG and almost nothing to do with the dataset; the heavy use of CFG by most models trades diversity for coherency. You don't need a constant intake of new images and styles. It's like saying an image created two years ago is stale because it doesn't follow some new style.
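For context, classifier-free guidance mixes an unconditional and a conditional noise prediction at sampling time. The sketch below shows the standard combination from the Ho & Salimans paper; the dummy arrays stand in for real denoiser outputs, and the guidance scale of 7.5 is just a commonly used illustrative value.

```python
import numpy as np

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray, guidance_scale: float) -> np.ndarray:
    """Classifier-free guidance: blend the unconditional and conditional noise
    predictions. guidance_scale = 1.0 reproduces plain conditional sampling;
    larger values push samples harder toward the prompt, improving coherence
    at the cost of sample diversity."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy illustration with dummy predictions (real values come from a denoiser network).
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)
print(cfg_combine(eps_uncond, eps_cond, guidance_scale=7.5))  # [7.5 7.5 7.5 7.5]
```

Higher guidance scales push every sample toward the same high-probability modes for a given prompt, which is one way a recognizable "house style" can emerge independently of the training data.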
There is the problem of literal style, though. The aesthetics of, say, clothes do evolve over time: not big year-to-year changes, but every 3-5 years? Sure. Just laughing at the thought of a model where every generated image is stuck in 1990s grunge attire.
Jonathan Ho, one of the authors of the CFG paper, now works for Ideogram, and Ideogram 2 is one of the very few models (or perhaps the only one) where I don't see the artifacts caused by the CFG, maybe he has achieved a breakthrough.
> Built on one of Mistral’s text models, Nemo 12B, the new model can answer questions about an arbitrary number of images of an arbitrary size given either URLs or images encoded using base64, the binary-to-text encoding scheme. Similar to other multimodal models such as Anthropic’s Claude family and OpenAI’s GPT-4o, Pixtral 12B should — at least in theory — be able to perform tasks like captioning images and counting the number of objects in a photo.
This is not a diffusion model -- it doesn't create images, it answers questions about them.
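To make the quoted point about inputs concrete, here is a rough sketch of a chat payload carrying a base64-encoded image. The field names follow the common OpenAI-style content-part shape and the model identifier is an assumption, so check Mistral's API docs for the exact schema.

```python
import base64
import json

# Read an image and base64-encode it into a data URL, as the quoted passage describes.
with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "pixtral-12b-2409",  # assumed model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How many objects are in this photo?"},
                {"type": "image_url", "image_url": f"data:image/jpeg;base64,{b64}"},
            ],
        }
    ],
}
print(json.dumps(payload)[:200])  # send with any HTTP client to the chat completions endpoint
```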
The issue is getting the data on newer aesthetic styles.
The more platforms lock down access to their data, the harder it'll be for models to stay up to date on art trends.
We just haven't had image gen around long enough to witness a major style change, like the shift from the skeuomorphic iPhone icons of old to the modern flat ones.
If an artist born today develops their own style that takes the world by storm in 20 years, the image generators of the time (for this thought experiment, imagine we're using the same image-gen techniques as today) would not know about it. They wouldn't be able to replicate it until they had enough training data on that style.
At what point does an agent sitting at a browser collecting information differ from a human?
I have multiple ad-blockers running; how am I different from a bot scouring the "free" web? I get the idea of copyright and creators wanting to be paid for their content. However, I think there are plenty of human users out there not "paying" for "free" content either. Which is the greater loss of revenue: a collection of over a million humans, or 100 or so corporate bots?
Humans use Google Chrome from their home IP address that isn't on any blacklists, and they're always happy to make an account and download an app instead of accessing a website. Or at least that's what companies think humans are.
>The days of free web scraping especially for the richer sources of material are almost gone
I would say the opposite: it has never been easier to collect a huge amount of data, particularly if you have a specific target. You don't even need to write a line of code if you're good at explaining to Claude 3.5 Sonnet what you want to achieve and the details.
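As a minimal illustration of that kind of targeted collection, the sketch below grabs image URLs from a single page with requests and BeautifulSoup. The URL is a placeholder, and any real crawl should respect robots.txt, rate limits, and the site's terms of use.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Minimal scraping sketch: pull <img> sources from one page.
url = "https://example.com/gallery"  # placeholder target
html = requests.get(url, timeout=10, headers={"User-Agent": "research-bot/0.1"}).text
soup = BeautifulSoup(html, "html.parser")

# Resolve relative src attributes against the page URL.
image_urls = [urljoin(url, img["src"]) for img in soup.find_all("img") if img.get("src")]
print(len(image_urls), "images found")
```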
1. This is a VLM, not a text-to-image model. You can give it images, and it can understand them. It doesn't generate images back.
2. It seems like Pixtral 12B benchmarks significantly below Qwen2-VL-7B [1], so if you want the best local model for understanding images, probably use Qwen2. If you want a large open-source model, Qwen2-VL-72B is most likely the best option.
Mistral being more open than 'openai' is kind of a meme. How can a company call itself open while refusing to openly distribute its product, when competitors are actually doing it?
I’d love to know how much money Mistral is taking in versus spending. I’m very happy for all these open weights models, but they don’t have Instagram to help pay for it. These models are expensive to build.
I like Qwen2-VL 7B because it outputs shorter captions with less fluff. But if you need to do anything advanced that relies on reasoning and instruction following, the model completely falls flat on its face.
For example, I have a couple of way-too-wordy captions made with another captioner, which I'd like to cut down to the essentials while correcting any mistakes. Qwen2 completely ignores the image with this approach and focuses only on the given caption, which makes it unable to even remotely fix issues in said caption.
I am really hoping Pixtral will be better for instruction following. But I haven't been able to run it because they didn't prioritize transformers support, which in turn has hindered the release of any quantized versions to make it fit on consumer hardware.
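For reference, here is roughly what the caption-shortening attempt described above looks like with the Qwen2-VL transformers integration, following its model card. The file paths and prompt wording are illustrative, and whether the model actually looks at the image instead of just rewriting the caption is exactly the behaviour being questioned.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # helper package referenced by the Qwen2-VL model card

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

wordy_caption = open("caption.txt").read()  # illustrative path to the overly wordy caption
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///data/img_0001.jpg"},  # illustrative path
        {"type": "text", "text": (
            "Here is an existing caption for this image:\n"
            f"{wordy_caption}\n"
            "Rewrite it in one short sentence, keeping only what is actually visible "
            "and correcting anything that does not match the image."
        )},
    ],
}]

# Standard Qwen2-VL preprocessing: chat template plus vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=96)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out)]  # drop the prompt tokens
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```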
I’m no expert but Florence2 has been my go-to. It’s pretty great at picking up art styles and IP stuff - “The image depicts Goku from the anime series Dragonball Z…”
I don't believe you can really prompt it, though; the other models I could prompt didn't work well on that front anyway.
TagGui is an easy way to try out a bunch of models.
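Florence-2 is driven by fixed task tokens rather than free-form prompts, which lines up with the "can't really prompt it" observation above. This is a rough sketch based on the model card, with an illustrative file path.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
# Florence-2 ships its modeling code with the checkpoint, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("artwork.png")      # illustrative path
task = "<MORE_DETAILED_CAPTION>"       # fixed task token, not a free-form prompt

inputs = processor(text=task, images=image, return_tensors="pt")
ids = model.generate(input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"],
                     max_new_tokens=256, num_beams=3)
raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(raw, task=task, image_size=(image.width, image.height)))
```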
Yeah, BLIP mostly ignores the prompt too. I tried to disassemble it and feed in my prompts, to no avail. Although I found that the default kohya gui arguments are not even remotely the best. Here's my args:
> the 12-billion-parameter model is about 24GB in size
Probably not on the device itself but I would love that use case as well. At least going to my own server. I’d want to protect notes in particular, which is why I don’t do any cloud backup on my RM2. But some self hosted, AI assisted OCR workflows could be really nice.
12B is pretty small, so I doubt it'll be anywhere close to InternVL2. However, Mistral does great work, and this model is likely still useful for on-device tasks.
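A quick back-of-the-envelope check on the quoted 24GB figure: 12B parameters at 2 bytes each (fp16/bf16) is about 24GB. The quantized sizes below are rough assumptions that ignore activation and KV-cache overhead.

```python
params = 12e9  # Pixtral 12B parameter count

print(f"fp16/bf16: ~{params * 2   / 1e9:.0f} GB")  # ~24 GB, matching the quoted size
print(f"8-bit:     ~{params * 1   / 1e9:.0f} GB")  # ~12 GB
print(f"4-bit:     ~{params * 0.5 / 1e9:.0f} GB")  # ~6 GB, before runtime overhead
```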
>Qwen2-VL is the latest addition to the vision-language models in the Qwen series, building upon the capabilities of Qwen-VL. Compared to its predecessor, Qwen2-VL offers:
>State-of-the-Art Image Understanding
>Extended Video Comprehension
Besides, it'd have been pretty silly for them to mention it on their slides if it wasn't.