Mistral releases Pixtral 12B, its first multimodal model (techcrunch.com)
163 points by jerbear4328 21 days ago | 40 comments



The "Mistral Pixtral multimodal model" really rolls off the tongue.

> It’s unclear which image data Mistral might have used to develop Pixtral 12B.

The days of free web scraping, especially for the richer sources of material, are almost gone, with everything from technical measures (API restrictions) to legal ones (copyright) building deep moats. I also wonder what they trained it on. They're not Meta or Google, with endless supplies of user content or exclusive contracts with the Reddits of the internet.


What do you mean by copyright measures? Has anything changed on that front in the last two years?

My hunch is that most AI labs are already sitting on a pretty sizable collection of scraped image data - and that data from two years ago will be almost as effective as data scraped today, at least as far as image training goes.


The issue with image models is that their style becomes identifiable and stale quite quickly, so you'll need a fresh intake of different, newer styles every so often, and that's going to be harder and harder to get.


The style becoming identifiable and stale has mostly to do with CFG and almost nothing to do with the dataset; the heavy use of CFG by most models trades diversity for coherency. You don't need a constant intake of new images and styles; it's like saying that an image created two years ago is stale because it doesn't follow a new style or something.

Also Pixtral is not a text-to-image model.
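
To make the trade-off concrete, here's a minimal sketch (not any particular model's production code) of the classifier-free guidance combination: the denoiser is run with and without the text condition, and the two noise predictions are extrapolated by a guidance scale.

  # Rough sketch of classifier-free guidance (Ho & Salimans, 2022).
  # eps_uncond / eps_cond are the model's noise predictions without and
  # with the text condition; w is the guidance scale.
  def cfg_noise(eps_uncond, eps_cond, w):
      # w = 1.0 recovers the plain conditional model; larger w pushes
      # samples harder toward the prompt, trading diversity for coherence.
      return eps_uncond + w * (eps_cond - eps_uncond)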


There is the problem of literal style, though. The aesthetics of, say, clothes do evolve over time; not big year-to-year changes, but every 3-5 years? Sure. Just laughing at the thought of a model where any image generated is stuck in 1990s grunge attire.


CFG for Classifier-Free Guidance?


Exactly, https://arxiv.org/abs/2207.12598

Jonathan Ho, one of the authors of the CFG paper, now works for Ideogram, and Ideogram 2 is one of the very few models (or perhaps the only one) where I don't see the artifacts caused by CFG; maybe he has achieved a breakthrough.


> Built on one of Mistral’s text models, Nemo 12B, the new model can answer questions about an arbitrary number of images of an arbitrary size given either URLs or images encoded using base64, the binary-to-text encoding scheme. Similar to other multimodal models such as Anthropic’s Claude family and OpenAI’s GPT-4o, Pixtral 12B should — at least in theory — be able to perform tasks like captioning images and counting the number of objects in a photo.

This is not a diffusion model -- it doesn't create images, it answers questions.
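
For anyone curious what "images encoded using base64" looks like in practice, here's a rough sketch of packing a local image into a chat-style message. The data-URL / "image_url" shape is an assumption borrowed from other OpenAI-compatible multimodal APIs, not necessarily Mistral's exact request format.

  # Sketch: base64-encode a local image for a chat-style multimodal request.
  # The message shape below is an assumption (OpenAI-style), not Pixtral's
  # documented format.
  import base64

  with open("photo.jpg", "rb") as f:
      b64 = base64.b64encode(f.read()).decode("utf-8")

  message = {
      "role": "user",
      "content": [
          {"type": "text", "text": "How many objects are in this photo?"},
          {"type": "image_url",
           "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
      ],
  }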


Train LoRAs for models that can take them


The issue is getting the data on newer aesthetic styles.

The more platforms lock down access to their data, the harder it'll be for models to stay up to date on art trends.

We just haven't had image gen around long enough to witness a major style change, like the shift from the skeuomorphic iPhone icons of old to the modern flat ones.


solvable without additional images


It’s literally not.

If an artist born today develops their own style that takes the world by storm in 20 years, the image generators of the time (for this thought experiment, imagine we're using the same image gen techniques as today) would not know about it. They wouldn't be able to replicate it until they get enough training data on that style.


At what point does an agent sitting at a browser collecting information differ from a human?

I have multiple ad-blockers running; how am I different from a bot scouring the “free” web? I get the idea of copyright and creators wanting to be paid for their content. However, I think there are plenty of human users out there not “paying” for “free” content either. Which one is a greater loss of revenue? A collection of over a million humans? Or 100 or so corporate bots?


Humans use Google Chrome from their home IP address that isn't on any blacklists, and they're always happy to make an account and download an app instead of accessing a website. Or at least that's what companies think humans are.


>The days of free web scraping especially for the richer sources of material are almost gone

I would say the opposite: it has never been easier to collect a huge amount of data, in particular if you have a target. You don't even need to write a line of code if you are good at explaining to Claude 3.5 Sonnet what you want to achieve and the details.


You don't need a contract with Reddit to scrape it; you can just add `.json` to any URL and you'll get the entire thread as one object.
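
A minimal sketch of what that looks like (the URL and User-Agent string are placeholders; Reddit tends to throttle generic ones, and rate limits still apply):

  # Fetch a Reddit thread as JSON by appending ".json" to the URL.
  import requests

  url = "https://www.reddit.com/r/MachineLearning/comments/abc123/some_thread.json"
  resp = requests.get(url, headers={"User-Agent": "my-scraper/0.1"})
  thread = resp.json()  # a list of two listings: [post, comment tree]
  print(thread[0]["data"]["children"][0]["data"]["title"])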


They have very heavy rate limits on their first-party API now. I can't even delete my own content, never mind scrape.


well, it's called "reddit" not "modify-via-API-it" :-)


there are torrents all over the internet of AI training data for images and video....

img2dataset also exists


Couple notes for newcomers:

1. This is a VLM, not a text-to-image model. You can give it images, and it can understand them. It doesn't generate images back.

2. It seems like Pixtral 12B benchmarks significantly below Qwen2-VL-7B [1], so if you want the best local model for understanding images, probably use Qwen2. If you want a large open-source model, Qwen2-VL-72B is most likely the best option.

1: https://qwenlm.github.io/blog/qwen2-vl/
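
If you want to try Qwen2-VL-7B locally, the usage looks roughly like this (assuming a recent transformers build with Qwen2-VL support plus the qwen-vl-utils helper; treat it as a sketch, not a verified recipe):

  # Sketch of running Qwen2-VL-7B-Instruct via transformers.
  from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
  from qwen_vl_utils import process_vision_info

  model = Qwen2VLForConditionalGeneration.from_pretrained(
      "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto")
  processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

  messages = [{"role": "user", "content": [
      {"type": "image", "image": "photo.jpg"},
      {"type": "text", "text": "Describe this image in one sentence."}]}]

  text = processor.apply_chat_template(
      messages, tokenize=False, add_generation_prompt=True)
  images, videos = process_vision_info(messages)
  inputs = processor(text=[text], images=images, videos=videos,
                     padding=True, return_tensors="pt").to(model.device)
  out = model.generate(**inputs, max_new_tokens=128)
  print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                               skip_special_tokens=True)[0])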


>If you want a large open-source model, Qwen2-VL-72B is most likely the best option.

Only the 2B and 7B have been "open sourced". From your link:

>We opensource Qwen2-VL-2B and Qwen2-VL-7B with Apache 2.0 license, and we release the API of Qwen2-VL-72B!


Mistral being more open than 'openai' is kind of a meme. How can a company call itself open while it refuses to openly distribute its product when competitors are actually doing it?


Meta too. OpenAI is an ironic name now.


Related earlier:

New Mistral AI Weights

https://news.ycombinator.com/item?id=41508695


I’d love to know how much money Mistral is taking in versus spending. I’m very happy for all these open weights models, but they don’t have Instagram to help pay for it. These models are expensive to build.


No license with this one yet, though you can probably assume it's Apache like the others.


The article says they confirmed it's Apache via email


A question for SD LoRA trainers: is this usable for making captions, and what are you using, apart from BLIP?

Also, can your model of choice understand your requests to include/omit particular nuances of an image?


I like Qwen2-VL 7B because it outputs shorter captions with less fluff. But if you need to do anything advanced that relies on reasoning and instruction following, the model completely falls flat on its face.

For example, I have a couple of way-too-wordy captions made with another captioner, which I'd like to cut down to the essentials while correcting any mistakes. With this approach Qwen2 completely ignores the image and focuses only on the given caption, which makes it unable to even remotely fix issues in said caption.

I am really hoping Pixtral will be better for instruction following. But I haven't been able to run it because they didn't prioritize transformers support, which in turn has hindered the release of any quantized versions to make it fit on consumer hardware.


I’m no expert but Florence2 has been my go-to. It’s pretty great at picking up art styles and IP stuff - “The image depicts Goku from the anime series Dragonball Z…”

I don't believe you can really prompt it, though, but the other models I could prompt also didn't work well on that front anyway.

TagGui is an easy way to try out a bunch of models.


Yeah, BLIP mostly ignores prompts too. I tried to disassemble it and feed in my prompts, to no avail. I did find that the default kohya GUI arguments are not even remotely the best. Here are my args:

  finetune/make_captions.py ... \
    --num_beams=12 \
    --top_p=0.9 \
    --max_length=75 \
    --min_length=24 \
    --beam_search \
    ...
With this, I very often just take its caption as is, or add very little.

> TagGui

Oh, interesting, thanks!


Could this be used for a self-hosted handwritten text recognition instance?

Like writing on an ePaper tablet, exporting the PDF, and feeding it into this model to extract todos from notes, for example.

Or what would be the SotA for this application?


> the 12-billion-parameter model is about 24GB in size

Probably not on the device itself, but I would love that use case as well, at least going to my own server. I'd want to protect notes in particular, which is why I don't do any cloud backup on my RM2. But some self-hosted, AI-assisted OCR workflows could be really nice.
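
A rough sketch of how that workflow could look, assuming the model is served behind an OpenAI-compatible endpoint (vLLM can expose one); the port, model id, and prompt are placeholders:

  # Sketch: render PDF pages to images and ask a self-hosted VLM for todos.
  import base64
  from io import BytesIO
  from pdf2image import convert_from_path   # pip install pdf2image
  from openai import OpenAI                 # pip install openai

  client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

  for page in convert_from_path("notes.pdf", dpi=200):
      buf = BytesIO()
      page.save(buf, format="PNG")
      b64 = base64.b64encode(buf.getvalue()).decode()
      resp = client.chat.completions.create(
          model="mistralai/Pixtral-12B-2409",   # placeholder model id
          messages=[{"role": "user", "content": [
              {"type": "text",
               "text": "Transcribe this handwritten page and list any TODO items."},
              {"type": "image_url",
               "image_url": {"url": f"data:image/png;base64,{b64}"}},
          ]}],
      )
      print(resp.choices[0].message.content)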



if you have a 3090, you could self host


12B is pretty small, so I doubt it'll be anywhere close to InternVL2. However, Mistral does great work, and this model is likely still useful for on-device tasks.


It appears to be slightly worse than Qwen2-VL 7B, a model almost half its size, if you look at Qwen's official benchmarks instead of Mistral's.

https://xcancel.com/_philschmid/status/1833954941624615151


But Qwen is not multimodal, or is it?


https://qwen2.org/vl/

>Qwen2-VL is the latest addition to the vision-language models in the Qwen series, building upon the capabilities of Qwen-VL. Compared to its predecessor, Qwen2-VL offers:

>State-of-the-Art Image Understanding

>Extended Video Comprehension

Besides, it'd have been pretty silly for them to mention it on their slides if it wasn't.


I've found Llama 3.1 8B to be effective at transforming unstructured text into structured data, now that LM Studio accepts a JSON schema parameter.

For a general-knowledge chatbot it doesn't know much, of course, but it's a good worker bee.
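
In case it helps anyone, here's a sketch of what that looks like against LM Studio's local OpenAI-compatible server (default port 1234); the response_format shape follows OpenAI's json_schema convention, and the model id is a placeholder:

  # Sketch: structured extraction via LM Studio's local server.
  import json
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

  schema = {
      "name": "contact",
      "schema": {
          "type": "object",
          "properties": {"name": {"type": "string"},
                         "email": {"type": "string"}},
          "required": ["name", "email"],
      },
  }

  resp = client.chat.completions.create(
      model="llama-3.1-8b-instruct",  # placeholder local model id
      messages=[{"role": "user", "content":
                 "Extract the contact from: 'Ping Jane Doe at jane@example.com.'"}],
      response_format={"type": "json_schema", "json_schema": schema},
  )
  print(json.loads(resp.choices[0].message.content))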



