Companies building differentiated AI products often have proprietary ML components and workflows that can be challenging to deploy. Using Modal, OpenArt was able to build and scale complex image generation pipelines without compromising on customizability.
About OpenArt
OpenArt is a platform for AI image generation and editing. With a lean and agile team of engineers, OpenArt has grown into a platform with over 3 million monthly active users globally. Users on OpenArt can generate images with popular models, use proprietary fine-tunes to achieve certain visual effects, and share their own customized workflows with the community.
Inflexibility of model API providers
OpenArt found that while API providers like Replicate and Fireworks worked for vanilla text-to-image generation, they were too inflexible for customized pipelines. OpenArt's "secret sauce" lay in their proprietary ComfyUI* workflows, which powered sophisticated features (e.g. style transfer, facial expression editing, background extension) in their advanced image editing suite. These workflows were part of complex pipelines that involved much more than sending a request to an image model. API providers, however, were built for basic use cases; even the ones that supported ComfyUI deployments didn't allow fully customizable workflows.
*ComfyUI is an open-source tool for assembling and executing advanced image generation pipelines. Below is a sample ComfyUI workflow with many interconnected nodes.
Hours-long deployment cycles on GCP
At the other end of the spectrum, OpenArt tried setting up their own infrastructure on AWS and GCP. This proved incredibly slow: every change to their ComfyUI pipeline took hours to redeploy, thanks to complex configuration interfaces, long waits to acquire and spin up GPUs, and quota limits. As a fast-moving startup, OpenArt found this to be a dealbreaker.
Fast, programmatic infrastructure on Modal
When exploring alternative solutions, Modal struck the right balance for OpenArt: it supported their custom, high-code needs while still providing a clean developer experience for deploying and scaling their product.
After iterating on a ComfyUI workflow locally, they could easily deploy it on Modal by:
- Defining a container image in Python that had their custom ComfyUI nodes installed.
- Defining a couple functions — one to launch ComfyUI as a subprocess, and one to execute a given ComfyUI workflow.
- Wrapping those functions in a Modal decorator that defined GPU requirements, the custom image, and a file mount containing the ComfyUI workflow.
- Running `modal deploy` on the command line to deploy the functions to the cloud.
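The steps above might look roughly like the following sketch. This is not OpenArt's actual code: the app name, GPU type, install commands, and the use of ComfyUI's local `/prompt` HTTP endpoint are all illustrative assumptions.

```python
import json
import subprocess
import urllib.request

import modal

# Container image with ComfyUI and any custom nodes installed.
# (Package and command choices here are illustrative.)
image = (
    modal.Image.debian_slim(python_version="3.11")
    .pip_install("comfy-cli")  # ComfyUI's CLI installer
    .run_commands("comfy --skip-prompt install --nvidia")
)

app = modal.App("comfyui-workflows")


@app.function(gpu="A10G", image=image, timeout=600)
def run_workflow(workflow: dict) -> dict:
    """Launch ComfyUI as a subprocess and execute one workflow against it."""
    # 1. Start the ComfyUI server in the background.
    subprocess.Popen(["comfy", "launch", "--background"])

    # 2. Submit the workflow JSON to ComfyUI's local HTTP API.
    #    (A real implementation would poll until the server is ready,
    #    then fetch the generated images from the /history endpoint.)
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With this saved as, say, `comfyui_app.py`, a single `modal deploy comfyui_app.py` builds the image remotely and deploys the function to the cloud; no cluster setup or GPU provisioning is needed beforehand.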
By leveraging Modal’s lightweight, programmatic solution rather than a traditional setup on GCP, OpenArt was able to deploy in minutes rather than hours. This empowered the team to focus on innovation instead of infrastructure. Coco Mao, CEO of OpenArt, remarked, “We believe in building solutions that are not just powerful but also elegant in their simplicity. That’s what sets us apart.”
And because Modal worked with arbitrary code, images, and files, OpenArt’s engineering team was able to seamlessly integrate all of their proprietary ComfyUI workflows into the product. Complex pipelines were turned into sophisticated yet user-friendly image tools without overloading their tech stack.
As their customer base continued to grow, Modal's autoscaling capabilities became a key piece of their infrastructure. During peak activity on the OpenArt platform, Modal scaled up to hundreds of GPU containers running their ComfyUI workflows, with no additional work required on OpenArt's part. Modal now powers 100+ workflows on OpenArt's platform.
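Autoscaling on Modal is configured declaratively on the function itself rather than with separate scaling code. A minimal sketch follows; note that the exact parameter names have changed across Modal releases (older versions used `concurrency_limit`, for example), and the numbers here are illustrative.

```python
import modal

app = modal.App("comfyui-scaled")


@app.function(
    gpu="A10G",          # illustrative GPU type
    max_containers=300,  # upper bound on the GPU fleet during traffic spikes
    min_containers=2,    # keep a few containers warm to reduce cold starts
)
def run_workflow(workflow: dict) -> dict:
    ...  # execute the ComfyUI workflow here
```

Modal's autoscaler then grows the container pool from the warm minimum toward the cap as requests queue up, and scales back down when traffic subsides.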
Building scalable, custom AI solutions doesn't require an oversized team. OpenArt's compact team shows that smart tools, focus, and simplicity are more than enough. Modal exists to help customers like OpenArt launch and scale the next generation of sophisticated AI products.