Hi! Draw Things should be able to add support in the next 2 weeks, after we get the video feature a bit more polished with the existing video models (Wan 2.1, Hunyuan, etc.).
OP seems to be making tons of these "fan" pages for AI tools according to his HN submission history. It's also the same design every time. Smells fishy.
> Disclaimer: This is a fan-made website created by AI enthusiasts. We are not affiliated with, endorsed by, or connected to manus.im. This website is an independent project and operates separately from the official Agenttars
Loads of sites like this get submitted; what's the motivation, I wonder?
None of the browsers I tried on my Mac shows any of the videos; you only see the top animation.
Also shown in the console: cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation
There are a couple of JS errors, which I presume keep the videos from appearing.
That's the least of the problems with how they've optimized their assets: there are about 250MB of animated GIFs on the Hugging Face page (actual 1989-vintage GIFs, not modern videos pretending to be GIFs). AI people apparently can't get enough of wasting bandwidth; at least this time it's another AI company footing the bill for all the expensive AWS egress they're burning through for no reason.
It says `Coming Soon` for the `inference.py` for the quantized version. Does anyone happen to know how to modify the non-quantized version [0] to work?
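Until the quantized `inference.py` lands, one workaround is to run the non-quantized checkpoint with the memory-saving knobs in the diffusers LTXPipeline integration rather than the repo's own script. The sketch below is just that, an assumption-laden example, not the official quantized path: the model id, prompt, resolution, frame count, and step count are placeholders to adjust for your hardware.

```python
# Sketch: run the non-quantized LTX-Video checkpoint with reduced VRAM use
# via the diffusers LTXPipeline integration (not the repo's inference.py).
# Model id, prompt, resolution, and frame count here are assumptions.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU

video = pipe(
    prompt="A sailboat crossing a calm bay at sunset",
    width=704,
    height=480,
    num_frames=121,            # LTX expects 8*k + 1 frames
    num_inference_steps=40,
).frames[0]

export_to_video(video, "output.mp4", fps=24)
```

This trades speed for memory (weights get shuttled between CPU and GPU), so it is a stopgap rather than a substitute for a proper quantized release.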
That isn't really what being behind implies. We've known how to multiply matrices since ... at least the 70s. And video processing isn't a wild new task for our friends at AMD. I'd expect that this would run on an AMD card.
But I don't own an AMD card to check with; when I did, it crashed randomly too often while doing machine learning work.
I have a 9070 XT... ROCm is currently unoptimized for it, and the generation speed is lower than it should be unless AMD is fudging the specs. Also, the memory management is dire and buggy: it will cause random OOMs on one run and then be fine the next. Splitting the workflow helps, so you only get one OOM crash in between. VAEs also crash from OOM. These are all just software issues, because VRAM isn't released properly on AMD.
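For the "VRAM isn't released" part specifically, a partial mitigation some people use is to explicitly flush PyTorch's caching allocator between heavy stages of a split workflow. A minimal sketch, assuming a ROCm build of PyTorch (which exposes the HIP backend through the `torch.cuda` namespace, so the same calls apply on AMD cards):

```python
# Sketch: drop Python references and flush the caching allocator between
# stages of a split workflow (e.g. after denoising, before the VAE decode).
import gc
import torch

def free_vram(tag: str = "") -> None:
    gc.collect()                  # release Python-side references first
    torch.cuda.empty_cache()      # hand cached blocks back to the driver
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"[{tag}] allocated={allocated:.2f} GiB reserved={reserved:.2f} GiB")

# Call between heavy stages to reduce the chance of a spurious OOM.
if torch.cuda.is_available():
    free_vram("between stages")
```

This doesn't fix driver-level leaks, but it does make the "fine on the next run" behavior less random by not carrying a full cache into the VAE step.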
Sometimes it is a little more work to get things set up, but it works fine. I've run plenty of models on my 7900 XTX: Wan 2.1 14B, Flux.1-dev, and Whisper (Wan and Flux with ComfyUI, Whisper with whisper.cpp).
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model and its tools are open source, allowing for community development and customization.
UPDATE: This is text on an unofficial website unaffiliated with the project. BUT https://www.lightricks.com/ has "LTXV open source video model" in a big header at the top of the page, so my complaint still stands, even though the FAQ copy I'm critiquing here is likely not the fault of Lightricks themselves.
So it's open weights, not open source.
Open weights is great! No need to use the wrong term for it.
- Section 2: entities with annual revenues of at least $10,000,000 (the “Commercial Entities”) are eligible to obtain a paid commercial use license, subject to the terms and provisions of a different license (the “Commercial Use Agreement”)
- Section 6: To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this Agreement, update the Model through electronic means, or modify the Output of the Model based on updates
This is an easy fix: change that FAQ entry to:
> Is LTXV-13B open weights?
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model is open weights and the underlying code is open source (Apache 2.0), allowing for community development and customization.
Are weights even copyrightable? I'm not sure what these licenses do, other than placate corporate legal or pretend to have some kind of open source equivalent for AI stuff.
Depends on how they're made. If they're fully automated and copyrights do not transfer from training data to trained weights (which is what everyone assumes at the moment) then they're the same as any machine output: not copyrightable, just like AI output isn't copyrightable.
However, if there is any active human involvement during training, one could claim that this makes it human work, so the weights would be copyrightable. For example, not too long ago I wrote a simple upscaler for gamescope while learning how to implement neural networks, and I did it in a somewhat "manual" manner: running the training for a bit, testing the output, modifying the code a little, adding/changing training data, then picking up from where the training stopped and continuing from there, etc. So one could claim that the weights I ended up with are the result of my own creative process (though TBH I wouldn't, nor am I comfortable with the idea myself, since we're talking about a few hundred numbers).
Will have to test this out; it looks like it runs on consumer hardware, which is cool. I tried making a movie [1] with LTXV several months ago and had a good time, but 30x faster generation sounds necessary.
> Hi, I'm using the default image-to-video workflow with default settings and I'm getting pixelated image-to-video output full of squares. How do I fix this?
The model supports both multi-scale rendering and autoregressive generation. With multi-scale rendering, you can generate a low-resolution preview of 200-300 frames and then upscale to higher resolutions (with or without tiling).
The autoregressive generation feature allows you to condition new segments based on previously generated content. A ComfyUI implementation example is available here:
AI video seems like it needs to be outlawed man. I just don’t see how the marginal value (irrelevant) it creates for people like advertisers could ever outweigh the huge downside and risk it comes with for society at large.
Seems a bit unfair (or maybe just ill-informed?) to lump this in with the confusing mess that is model naming at OpenAI.
The parameter count is much more useful and concrete information than anything OpenAI or their competitors have put into the names of their models.
The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.
It also allows you to easily distinguish between different sizes of models trained in the same way. It's likely there is a higher parameter count model in the works, and this makes it easy to distinguish between the two.
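To make the "can I run it" heuristic concrete, here is a back-of-the-envelope sketch that converts parameter count into weight memory at a few common precisions. It ignores activations, latents, the text encoder, and the VAE, so treat the numbers as a floor rather than a requirement.

```python
# Rule-of-thumb: weight memory = parameter count * bytes per parameter.
BYTES_PER_PARAM = {"fp32": 4, "bf16/fp16": 2, "fp8/int8": 1}

def weight_gib(params: float, dtype: str) -> float:
    return params * BYTES_PER_PARAM[dtype] / 2**30

for params, label in [(2e9, "2B"), (13e9, "13B")]:
    line = ", ".join(f"{d}: {weight_gib(params, d):.1f} GiB" for d in BYTES_PER_PARAM)
    print(f"{label}: {line}")

# 13B at bf16 is roughly 24 GiB of weights alone, which is why quantized
# variants matter for 16 GB consumer cards, while the 2B fits almost anywhere.
```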
> It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.
In this case it looks like this is the higher parameter count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)
re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.
If you’re looking for the official LTXV model and working ComfyUI flows, make sure to visit the right sources:
- Official site: https://www.lightricks.com
- Model + Playground: https://huggingface.co/Lightricks/LTX-Video
The LTXV model runs on consumer GPUs, and all ComfyUI flows should work reliably from these official resources. Some third-party sites (like ltxvideo.net or wanai.pro) are broken, misconfigured, or heavy on unnecessary assets—so stick to the official ones to avoid issues and missing content.