Show HN: ML Blocks – Deploy multimodal AI workflows without code (mlblocks.com)
112 points by neilxm 10 months ago | 32 comments
Hey everyone,

ML Blocks is a node-based workflow builder to create multi-modal AI workflows without writing any code.

You connect blocks that call various visual models like GPT-4V, Segment Anything, DINO, etc., along with basic image processing blocks like resize, invert color, blur, crop, and several others.

The idea is to make it easier to deploy multi-step image processing workflows without needing to spin up endless custom OpenCV cloud functions to glue AI models together. Usually, even if you're using cloud inference servers like Replicate, you still need to write your own image processing code to pre- and post-process images in your pipeline. When you're trying to move fast, that's just unnecessary overhead.

With ML Blocks, you can build a workflow and deploy the whole thing as a single API. AFAIK, ML Blocks is the only end-to-end workflow builder built specifically for image processing.
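
To give a sense of the shape of it, here's a rough sketch of what calling a deployed workflow looks like from Python. The endpoint path, payload fields, and auth header below are illustrative placeholders, not our exact API:

    # Hypothetical example of calling a deployed ML Blocks workflow as one API.
    # The URL, headers, and payload fields are made up for illustration.
    import requests

    resp = requests.post(
        "https://api.mlblocks.com/v1/workflows/<workflow_id>/run",  # placeholder URL
        headers={"Authorization": "Bearer <your_api_key>"},
        json={"inputs": {"image_url": "https://example.com/product.jpg"}},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. URLs of the output images produced by the workflow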

If you're curious, our models run on Replicate, HuggingFace & Modal Labs cloud GPUs and we use React Flow for the node UX.




I like the idea of node-based image editing. It reminds me of when I was learning to write shader code for games: GLSL was waaaay over my head until I discovered node-based shaders, and then I didn't have to wrangle with code and could instead focus on experimenting with different nodes.

However, creating a "shader that works" soon stopped being the issue; how to create X effect using shaders became my next blocker. Luckily there were a ton of YouTube tutorials on these, which was very helpful, but this continues to be a pain point even now.

Since we're now in the age of AI, would it be possible to prompt something like "take image A, a concept art of a character, and convert it into a walking animation sprite sheet with 16 frames for each direction: up, down, left, right, and all diagonals" and have it generate not only the result, but also a workflow that produces the result, so it can be edited and tweaked?


Oh yeah, I know what you mean. There are several parallels with shader nodes for sure. We've been thinking about a Voyager/agent-style approach where an agent can learn "skills," each skill being an individual block. Each skill represents a certain function applied to an image, and given a specific instruction set we should be able to craft a sequence of actions that leads to that result.

One way to leverage that is building graphs via a prompt, but another way might be to not think of the workflow as a pre-constructed graph at all. Rather, perhaps we build dynamic graphs whenever you ask for a certain action - like a conversational image editing interface.

So you say something like "make the woman's hair purple." We apply segmentation to the hair, and then add a purple color overlay exactly to that area.
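
That overlay step is basically mask compositing. A rough sketch of the idea (assuming you already have a binary hair mask from the segmentation step; this is not our actual block code):

    # Sketch: tint only the masked (hair) region purple.
    # Assumes `image.png` and a binary `hair_mask.png` from a segmentation step.
    import numpy as np
    from PIL import Image

    image = np.array(Image.open("image.png").convert("RGB"), dtype=np.float32)
    mask = np.array(Image.open("hair_mask.png").convert("L"), dtype=np.float32) / 255.0

    purple = np.array([128.0, 0.0, 128.0])
    alpha = 0.5  # overlay strength
    blend = image * (1 - alpha) + purple * alpha

    # Apply the tinted version only where the mask is set.
    out = image * (1 - mask[..., None]) + blend * mask[..., None]
    Image.fromarray(out.astype(np.uint8)).save("out.png")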


I prototyped a system one year ago which created node-based, editable scene graphs in-browser with the help of an LLM-powered agent. It allowed the user to quickly construct complex 3D scenes.

It was some amazing tech with a ton of applications, but sadly leadership had other plans and pivoted to a highly derivative, slapped-together AI sex bot.


Theoretically yes, with a few limitations.

The walking animation is going to be a lost cause without specific inputs. We can do ControlNet stuff to make a character match a pose, and you can supply a series of poses that represent the walking animation.
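
For what it's worth, the ControlNet route is pretty approachable with diffusers. A minimal sketch, assuming you already have an OpenPose-style pose image for each frame (model IDs here are just the commonly used public checkpoints; swap in whatever you prefer):

    # Minimal ControlNet (OpenPose) sketch with diffusers.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from PIL import Image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    pose = Image.open("walk_frame_01_pose.png")  # one pose image per animation frame
    frame = pipe("pixel art character, side view, walking", image=pose).images[0]
    frame.save("walk_frame_01.png")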

On some level it seems silly to try to get anything to generate the workflow to do that. What you really want is a workflow that generates an image from a pose, and then you pass in the poses you want. Side tangent: I don't know why the AI generation community has decided "workflow" is what they're going to call "functions."

After that your problem is that the results will be kind of meh. And that’s the brunt of where it’s at right now. You can make assets that satisfy descriptive conditions. But you can’t demand they be good. And you can’t demand they be consistent across different drawings. Can you hire an artist to fix your generated directionally correct assets? Yeah, maybe. Sounds depressing and error prone though.


Check out ComfyUI for a much more advanced and open source version of this.

https://github.com/comfyanonymous/ComfyUI


Not really an apples-to-apples comparison. ComfyUI is for diffusion-focused workflows; this is not.

Plus you don't need a local GPU for this. I realize that's a pro for some and a con for others, so there can be different products in the market serving different needs.


Sorry to write this comment: isn't it exactly like ComfyUI?


Builder here! We are inspired by ComfyUI.

I would say that although the form factors look similar, we're operating at a different abstraction level. ComfyUI focuses on components within the Hugging Face diffusers ecosystem and lets artists recompose different workflows to come up with amazing visual effects.

We're trying to offer a way for people to recompose apps/APIs with foundation models!


I think chaiNNer might be a better comparison, although both are used locally to process images while this looks like it's meant to easily build an API. I think it looks neat, I think a lot of people will find this very useful.


That's true. We started off with a base set of blocks, but I think the real utility will come from easy orchestration and API endpoint building. We're pushing in the direction of APIs and shareable workflows, so hopefully some of these comparisons get clarified soon.


ComfyUI is just for interacting with Stable Diffusion.

This supports other models.


Thanks, you beat me to it :)

That being said, you're not wrong. It's definitely inspired by ComfyUI, but with much simpler abstractions, much broader utility, and extensions like building a user front end coming up shortly.


ComfyUI is adding lots of other models. And it's open source and much further along.

https://github.com/comfyanonymous/ComfyUI


Interesting! Always fun to see the same ideas in the air at the same time!

https://linzumi.com/

Definitely think this sort of idea could become the "serverless" equivalent for ML-using apps. I'm curious what you think re: versioning, consumption from various client languages, observability/monitoring/queueing, etc. Feels like it could grow into a meaningful platform.


Yes! That's where our heads are at as well. The reality with a lot of multimodal / image processing style code is that it's never truly serverless - image manipulation in Node.js is tragically bad, so you always end up needing Python endpoints to do it.

Re: versioning / client languages etc. - right now we don't have block versioning, but it's definitely going to be required. As of now, the blocks are each their own endpoint, by design. We're thinking about allowing people to share their own blocks, and perhaps even outsource compute to endpoint providers while we focus on the orchestration layer.

Better observability and monitoring is definitely on the docket as well, especially because some of these tasks take a really long time - sometimes even outlasting the REST API's timeout window. We'll be switching over to queued jobs and webhooks.
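
The shape we're heading toward is the standard async pattern: submit a job, get an id back right away, and we call your webhook when it finishes. Rough sketch only - the endpoint paths and fields here are illustrative, not final:

    # Hypothetical async-job flow; paths and fields are for illustration only.
    import requests

    job = requests.post(
        "https://api.mlblocks.com/v1/workflows/<workflow_id>/jobs",
        headers={"Authorization": "Bearer <your_api_key>"},
        json={
            "inputs": {"image_url": "https://example.com/input.jpg"},
            "webhook_url": "https://yourapp.com/hooks/mlblocks",  # called on completion
        },
    ).json()

    # Or poll instead of using a webhook:
    status = requests.get(
        f"https://api.mlblocks.com/v1/jobs/{job['id']}",
        headers={"Authorization": "Bearer <your_api_key>"},
    ).json()
    print(status)  # e.g. {"state": "queued" / "running" / "done", "outputs": ...}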


I had a chance to play around with the product, and I really love the ease of creating a multi-step workflow, to the point where I'm sure I can train my marketing team to use it. That being said, is there a way to share these workflows with others, either privately or publicly?


Nice! Sharing workflows is coming up in approximately two sprints. We're working on two flavors of sharing. The first is sharing the workflow directly and letting someone copy it, aimed at the dev community. The more interesting option, though, is the second, where we'll let you build a read-only dashboard that just shows inputs and outputs. That should be useful when you share it with a marketing team that doesn't need to mess around with the graph but would use the workflow for things like repetitive image editing tasks.


This is great, and I totally agree with the above comment. I think it's a really useful next step up for someone who is comfortable with prompts but wants a bit more control or a reusable workflow. It'd be cool if there were also more premade "recipes" as a starting point to modify/extend. Then hitting the play button gives you something right away.

Also kudos to whoever made the fun little tilt animations on hover ;)


We're working on shareable graphs and premade recipes! I actually started sharing a few on our blog - here's an example: https://blog.mlblocks.com/p/auto-generate-banner-images-for-...

haha, the tilt animations are a by-product of my obsession with Trello. :)


This is amazing! It really helps those of us who are just getting started building AI image workflows for ecom. Are there any plans to add bulk processing capabilities, allowing workflows to run on multiple images automatically without manual UI interactions?


We started this to solve bulk processing issues we had when building a previous eCommerce tool, so I 100% know what you mean. We're adding API support soon, and we'll add some examples of how to connect this to Shopify or something like Airtable / Strapi / Retool for workflow automations.


Love the idea; however, it's not clear whether I'll get access to a large collection of components for building such workflows, or what is currently possible. Would be nice to get this info before proceeding with auth.


Theoretically, most OpenCV-type image pre/post-processing is available as blocks, and all the major multimodal + diffusion AI blocks are available as well. A sampling of what we've recently added:

AI Blocks:
- Multimodal LLM (GPT-4V)
- Remove objects in images
- AI Upscale 4x
- Prompted Segmentation (SAM w/ text prompting)

Editing Blocks:
- Change format
- Rotate
- Invert Color
- Blur
- Resize
- Mask to Alpha

If we've missed something, please let us know - we just went through a big exercise in making sure we can quickly add new blocks.


Is this all AI, or are you using something like ImageMagick for the lower-level tasks?


It's a combination of things. The idea is that you can build workflows that chain functionality from AI models as well as lower-level image processing tasks. For the lower-level tasks we use the usual suspects - PIL, ImageMagick, OpenCV, etc.
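
For the curious, the lower-level editing blocks are thin wrappers over exactly that kind of thing. A sketch of roughly what resize/blur/invert boil down to with Pillow (not our actual block code):

    # Sketch of what resize / blur / invert / change-format blocks boil down to.
    from PIL import Image, ImageFilter, ImageOps

    img = Image.open("input.jpg").convert("RGB")
    img = img.resize((512, 512))                          # Resize block
    img = img.filter(ImageFilter.GaussianBlur(radius=2))  # Blur block
    img = ImageOps.invert(img)                            # Invert Color block
    img.save("output.png", format="PNG")                  # Change Format block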


To add to pj's comment -

We're adding more blocks constantly. We're also considering allowing the community to push their own blocks using an open API schema.


Hey, this is really cool!

A small suggestion: I don't think "ML" is a memorable term for non-technical people. I'd probably try a different name.


Thanks! We'll ponder over this one :)


This concept is widely used in video editing and visual effects.


Yep! If you've used Blender, you'll notice the parallels with shader nodes :)


Very cool, looking forward to seeing this evolve


Thank you!!




