Hey everyone,
ML Blocks is a node-based workflow builder for creating multi-modal AI workflows without writing any code.
You connect blocks that call visual models like GPT-4V, Segment Anything, and DINO, along with basic image-processing blocks like resize, invert color, blur, crop, and several others.
The idea is to make it easier to deploy multi-step image-processing workflows without needing to spin up endless custom OpenCV cloud functions to glue AI models together. Even if you're using a cloud inference service like Replicate, you still have to write your own image-processing code to pre- and post-process images in your pipeline. When you're trying to move fast, that's just unnecessary overhead.
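To make that concrete, here's the sort of glue code I mean; a rough sketch with a placeholder model slug and made-up steps, pre-processing with OpenCV, calling a hosted model on Replicate, then post-processing:

    # Illustrative glue code: resize with OpenCV, call a hosted model on
    # Replicate, then post-process. The model slug/version is a placeholder.
    import cv2
    import replicate

    img = cv2.imread("input.png")
    img = cv2.resize(img, (1024, 1024))  # pre-process: normalize size
    cv2.imwrite("resized.png", img)

    with open("resized.png", "rb") as f:
        output = replicate.run(
            "some-org/segment-anything:abc123",  # placeholder, not a real version
            input={"image": f},
        )

    # ...then download the mask, blur the background with cv2.GaussianBlur,
    # composite, upload the result, and wrap all of it in a cloud function.

Every extra step in the pipeline means more of this boilerplate to write, deploy, and maintain.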
With ML Blocks, you build a workflow and deploy the whole thing as a single API. AFAIK, ML Blocks is the only end-to-end workflow builder designed specifically for image processing.
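So instead of all of the above, the deployed workflow behaves like one HTTP endpoint. Roughly like this (the URL, auth header, and payload shape here are illustrative, not our documented API):

    import requests

    resp = requests.post(
        "https://api.mlblocks.example/workflows/<workflow_id>/run",  # placeholder URL
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"image_url": "https://example.com/input.png"},
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. URLs of the processed output images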
If you're curious about the stack: our models run on cloud GPUs via Replicate, Hugging Face, and Modal Labs, and we use React Flow for the node UX.
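For the curious, a GPU-backed block on Modal looks roughly like this (a simplified sketch of the pattern, not our actual code; the app name and GPU choice are arbitrary):

    import modal

    app = modal.App("mlblocks-block-demo")  # hypothetical app name
    image = modal.Image.debian_slim().pip_install("torch", "opencv-python-headless")

    @app.function(gpu="A10G", image=image)
    def run_block(image_bytes: bytes) -> bytes:
        # load the model once per container, run inference,
        # return the processed image bytes
        ...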
However, soon enough, creating a shader that works was no longer the issue; figuring out how to create a specific effect with shaders was my next blocker. Luckily there were a ton of YouTube tutorials on that, which was very helpful, but it continues to be a pain point even now.
Now that we're in the age of AI, would it be possible to prompt something like "create a workflow that takes image A, a piece of character concept art, and converts it into a walking-animation sprite sheet with 16 frames for each walk direction: up, down, left, right, and all diagonals" and have it generate not only the result but also the workflow that produced it, so it can be edited and tweaked?
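I imagine the trick is to have the LLM emit the workflow graph itself (nodes and edges) rather than just the final image, so it lands in the editor as something you can tweak. A sketch of what I mean, with a made-up node schema:

    import json
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": "Return JSON with 'nodes' and 'edges' for an image "
                       "workflow that turns character concept art into a "
                       "16-frames-per-direction walking sprite sheet.",
        }],
    )
    workflow = json.loads(resp.choices[0].message.content)
    print([n["type"] for n in workflow["nodes"]])  # e.g. ["segment", "animate", "tile"]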