I’ve been playing around with ComfyUI for a while now and built my fair share of apps using it as an API, both for my own projects and others.
To simplify this process, one of my crazy friends and I just launched a cloud solution to deploy any ComfyUI workflow on scalable infrastructure. It works with any node and model.
You just upload your workflow, and the system detects all the custom nodes and installs them automatically. It also downloads every model it recognizes. If it can't find a model (because you are using a custom LoRA, for example), it asks for a download link and adds the file to the deployment.
We have 7 different GPUs available to cover all use cases:
- T4 (16GB VRAM)
- L4 (24GB VRAM)
- A10G (24GB VRAM)
- A100 (40GB VRAM)
- A100-80GB (80GB VRAM)
- L40S (48GB VRAM)
- H100 (80GB VRAM)
We’ve optimized this solution to work with the open-source app builder for Comfy workflows we released a few months ago: https://github.com/ViewComfy/ViewComfy. The idea is that you can turn a workflow into a web app running in the cloud in just a few minutes.
To make this new project as useful as possible, the deployments can also be accessed via an API that can easily be integrated into existing apps.
You can get started right away: https://app.viewcomfy.com/ =)
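To give a feel for what calling a deployed workflow from an existing app might look like, here is a minimal Python sketch. The endpoint route, auth scheme, and payload shape below are my assumptions for illustration only, not the actual ViewComfy API contract; check the docs for the real details.

```python
import json

def build_request(deployment_url: str, params: dict, api_key: str) -> dict:
    """Assemble a request description for a hypothetical workflow-run endpoint.

    The "/run" route, Bearer auth, and {"params": ...} body are placeholders,
    not the documented ViewComfy API.
    """
    return {
        "url": f"{deployment_url}/run",            # hypothetical route
        "headers": {
            "Authorization": f"Bearer {api_key}",  # hypothetical auth scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps({"params": params}),
    }

if __name__ == "__main__":
    req = build_request(
        "https://example.invalid/my-workflow",  # placeholder deployment URL
        {"prompt": "a watercolor fox", "seed": 42},
        api_key="YOUR_KEY",
    )
    # Send it with any HTTP client, e.g.:
    # import urllib.request
    # urllib.request.urlopen(urllib.request.Request(
    #     req["url"], data=req["body"].encode(), headers=req["headers"]))
    print(req["url"])
```

The idea is the same as with any hosted inference API: your app sends the workflow's input parameters as JSON and gets the generated output back, so the ComfyUI graph itself stays entirely server-side.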