Make music with ACE-Step
In this example, we show you how to run ACE Studio’s ACE-Step music generation model on Modal.
We’ll set up both a serverless music generation service and a web user interface.
Setting up dependencies
We start by defining the environment our generation runs in. This takes some explaining since, like most cutting-edge ML environments, it is a bit fiddly.
This environment is captured by a container image,
which we build step-by-step by calling methods to add dependencies,
like apt_install to add system packages and pip_install to add
Python packages.
Note that we don’t have to install anything with “CUDA” in the name: the drivers come for free with the Modal environment, and the rest gets installed by pip. That makes our life a lot easier!
If you want to see the details, check out this guide in our docs.
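Concretely, the image definition might look something like the sketch below. The specific packages, and the install of ACE-Step straight from its GitHub repository, are assumptions to adapt to your needs rather than the example’s canonical list.

```python
import modal

image = (
    modal.Image.debian_slim(python_version="3.10")
    # system packages: git for source installs, ffmpeg for audio I/O (assumed)
    .apt_install("git", "ffmpeg")
    # Python packages -- note that nothing CUDA-specific needs to appear here
    .pip_install(
        "torch",
        "torchaudio",
        "huggingface_hub[hf_transfer]",
        # assumption: install ACE-Step directly from its repository
        "git+https://github.com/ace-step/ACE-Step.git",
    )
)
```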
In addition to source code, we’ll also need the model weights.
ACE-Step integrates with the Hugging Face ecosystem, so setting up the models is straightforward: ACEStepPipeline uses the Hugging Face model hub internally to download the weights if they aren’t already present.
But Modal Functions are serverless: instances spin down when they aren’t being used. If we want to avoid downloading the weights every time we start a new instance, we need to store the weights somewhere besides our local filesystem.
So we add a Modal Volume to store the weights in the cloud. For more on storing model weights on Modal, see this guide.
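A minimal sketch of the Volume setup, where both the volume name and the mount path are our own (arbitrary) choices:

```python
# A named, persistent Volume for the model weights; created on first use.
model_volume = modal.Volume.from_name("ace-step-models", create_if_missing=True)

# Where the Volume will be mounted inside our containers.
MODEL_DIR = "/models"
```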
We don’t need to change any of the model loading code — we just need to make sure the model gets stored in the right directory.
To do that, we set an environment variable that Hugging Face expects
(and another one that speeds up downloads, for good measure)
and then run the load_model Python function.
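In code, that might look like the following sketch. HF_HOME and HF_HUB_ENABLE_HF_TRANSFER are the standard Hugging Face variables; the body of load_model, including the idea that constructing ACEStepPipeline with a checkpoint directory triggers the download, is an assumption based on ACE-Step’s README.

```python
def load_model():
    # Constructing the pipeline with a checkpoint directory lets ACE-Step fetch any
    # missing weights from the Hugging Face hub into MODEL_DIR. The exact download
    # behavior is ACE-Step's internals; treat this body as a sketch.
    from acestep.pipeline_ace_step import ACEStepPipeline

    ACEStepPipeline(checkpoint_dir=MODEL_DIR)


image = (
    image
    # point the Hugging Face cache at the Volume and enable accelerated downloads
    .env({"HF_HOME": MODEL_DIR, "HF_HUB_ENABLE_HF_TRANSFER": "1"})
    # run the download step with the Volume attached so the weights persist
    .run_function(load_model, volumes={MODEL_DIR: model_volume})
)
```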
While we’re at it, let’s also define the environment for our UI. We’ll stick with Python, using FastAPI and Gradio.
This is a totally different environment from the one we run our model in. Say goodbye to Python dependency conflict hell!
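For instance (versions are left unpinned here for brevity; pin them for a real deployment):

```python
web_image = modal.Image.debian_slim(python_version="3.12").pip_install(
    "fastapi[standard]",
    "gradio",
)
```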
Running music generation on Modal
Now, we write our music generation logic.
- We make an App to organize our deployment.
- We load the model at start, instead of during inference, with modal.enter, which requires that we use a Modal Cls.
- In the app.cls decorator, we specify the Image we built and attach the Volume. We also pick a GPU to run on: here, an NVIDIA L40S. All three pieces appear in the sketch below.
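Assembled, that might look like the following sketch. The app name, the generate method, and the keyword arguments passed to ACEStepPipeline are illustrative assumptions rather than the example’s exact code.

```python
app = modal.App("ace-step-music")  # app name is our choice


@app.cls(
    image=image,                        # the image we built above
    volumes={MODEL_DIR: model_volume},  # attach the weights Volume
    gpu="L40S",                         # run on an NVIDIA L40S
)
class MusicGenerator:
    @modal.enter()
    def load(self):
        # Runs once per container start, not once per request.
        from acestep.pipeline_ace_step import ACEStepPipeline

        self.pipeline = ACEStepPipeline(checkpoint_dir=MODEL_DIR)

    @modal.method()
    def generate(self, prompt: str, lyrics: str = "", duration: float = 60.0) -> bytes:
        # NB: the keyword names below are assumptions about ACEStepPipeline's call
        # signature; check the ACE-Step documentation for the real parameters.
        out_path = "/tmp/output.wav"
        self.pipeline(
            prompt=prompt,
            lyrics=lyrics,
            audio_duration=duration,
            save_path=out_path,
        )
        with open(out_path, "rb") as f:
            return f.read()
```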
We can then generate music from anywhere by running code like what we have in the local_entrypoint below.
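A sketch of what that local_entrypoint could look like (the parameters and output filename are illustrative):

```python
@app.local_entrypoint()
def main(prompt: str = "dreamy synthwave, 120 bpm", duration: float = 60.0):
    # Runs on your machine; .remote() dispatches the work to the GPU container.
    audio_bytes = MusicGenerator().generate.remote(prompt=prompt, duration=duration)
    with open("output.wav", "wb") as f:
        f.write(audio_bytes)
    print("saved output.wav")
```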
You can execute it with a command like the one below, substituting the path where you saved this example:
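```bash
# the filename is a placeholder for wherever you saved this example
modal run ace_step.py
```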
Pass in --help to see options and how to use them.
Hosting a web UI for the music generator
With the Gradio library, we can create a simple web UI in Python that calls out to our music generator, then host it on Modal for anyone to try out.
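Here’s a minimal sketch of what that could look like, reusing the web_image and MusicGenerator names from the sketches above; the Gradio layout itself is illustrative.

```python
@app.function(image=web_image)
@modal.asgi_app()
def ui():
    import gradio as gr
    from fastapi import FastAPI
    from gradio.routes import mount_gradio_app

    def generate(prompt: str):
        # Call out to the GPU-backed class and hand Gradio a playable file.
        audio_bytes = MusicGenerator().generate.remote(prompt=prompt)
        out_path = "/tmp/ui_output.wav"
        with open(out_path, "wb") as f:
            f.write(audio_bytes)
        return out_path

    demo = gr.Interface(
        fn=generate,
        inputs=gr.Textbox(label="Describe the music you want"),
        outputs=gr.Audio(label="Generated track"),
    )
    return mount_gradio_app(app=FastAPI(), blocks=demo, path="/")
```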
To deploy both the music generator and the UI, run a command like the following (again substituting your own filename):
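```bash
# the filename is a placeholder
modal deploy ace_step.py
```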