Customize open weights models with your data. Own the means of model production with Modal's scalable, containerized infrastructure.
Finish your hyperparameter sweeps 100x faster. Scale up to hundreds of multi-GPU training runs in just a few seconds with a single function call, only paying for the compute you use.
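For instance, a sweep can be a single `.map()` call over your hyperparameters. The sketch below is illustrative: the app name, GPU type, image contents, and the stub training loop are assumptions, not a prescribed setup.

```python
import modal

app = modal.App("hyperparameter-sweep")
image = modal.Image.debian_slim().pip_install("torch")


@app.function(gpu="A100:2", image=image, timeout=60 * 60)
def train(learning_rate: float) -> float:
    # Stand-in for a real training loop; returns a validation loss.
    print(f"training with lr={learning_rate}")
    return 0.0


@app.local_entrypoint()
def sweep():
    # One .map() call fans out to a fresh container per hyperparameter,
    # each with its own GPUs; you pay only while they run.
    learning_rates = [1e-5, 3e-5, 1e-4, 3e-4]
    for loss in train.map(learning_rates):
        print(loss)
```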
Store private datasets and model weights in modal.Volumes as easily as writing to local disk.
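A minimal sketch of that pattern, assuming a hypothetical Volume named "finetune-weights" and placeholder paths:

```python
import modal

app = modal.App("volume-demo")
weights = modal.Volume.from_name("finetune-weights", create_if_missing=True)


@app.function(volumes={"/weights": weights})
def save_checkpoint(step: int, blob: bytes):
    # The Volume is mounted like a local directory; ordinary file I/O works.
    with open(f"/weights/checkpoint-{step}.bin", "wb") as f:
        f.write(blob)
    # Persist the new files so other functions and later runs can read them.
    weights.commit()
```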
Use your favorite ML frameworks, like Hugging Face, PyTorch, and axolotl, or write your own code. Own your models and focus on making them better, not writing glue YAML.
Monitor experiment results with your go-to tools like Weights & Biases or TensorBoard, along with real-time resource metrics from your Modal dashboard. Easily run model-based evaluations without slowing down training.
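One way this can look in practice: log training metrics to Weights & Biases from inside the training container, and `.spawn()` evaluations into their own containers so the training loop never waits on them. The project name, Secret name, GPU types, and the stub eval are assumptions for this sketch.

```python
import modal

app = modal.App("train-with-logging")
image = modal.Image.debian_slim().pip_install("wandb")


@app.function(image=image, gpu="A10G")
def evaluate(checkpoint_path: str) -> float:
    # Stand-in for a model-based evaluation; returns a score.
    return 0.0


@app.function(
    image=image,
    gpu="A100",
    # Assumes a W&B API key stored as a Modal Secret named "wandb-secret".
    secrets=[modal.Secret.from_name("wandb-secret")],
)
def train():
    import wandb

    run = wandb.init(project="my-finetune")  # placeholder project name
    for step in range(3):
        run.log({"loss": 1.0 / (step + 1)})
        # Fire off the eval in its own container and keep training;
        # .spawn() returns immediately with a handle to the result.
        evaluate.spawn(f"/weights/checkpoint-{step}.bin")
    run.finish()
```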
Seamlessly integrate your data processing, model training, and serving functions — all with different hardware requirements and image definitions — on Modal. Break the wall between research and production.
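A sketch of what that can look like in a single app, with each step getting its own image and hardware. The package lists, GPU types, paths, and the S3 URL are illustrative assumptions.

```python
import modal

app = modal.App("finetune-pipeline")

cpu_image = modal.Image.debian_slim().pip_install("datasets")
gpu_image = modal.Image.debian_slim().pip_install("torch", "transformers")
serve_image = gpu_image.pip_install("fastapi[standard]")


@app.function(image=cpu_image, cpu=8)
def preprocess(raw_path: str) -> str:
    # CPU-only data preparation step.
    return "/data/tokenized"


@app.function(image=gpu_image, gpu="H100:4")
def train(dataset_path: str) -> str:
    # Multi-GPU training step.
    return "/weights/final"


@app.function(image=serve_image, gpu="A10G")
@modal.web_endpoint(method="POST")
def generate(prompt: str) -> str:
    # Serving step exposed as an HTTP endpoint.
    return f"completion for: {prompt}"


@app.local_entrypoint()
def pipeline():
    # The research-to-production flow chains together in one place.
    weights = train.remote(preprocess.remote("s3://my-bucket/raw"))
    print(weights)
```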
Fine-tune LLMs on multiple GPUs with all the bells and whistles.
A serverless Slack bot with a fine-tuned LLM that sounds like you.
Fine-tune Stable Diffusion to generate images of your pet in any art style.
Serve hundreds of LoRA fine-tunes from S3.