“We fine-tune image models on Modal because we can experiment with new ideas quickly. We chose to deploy on Modal too because it's far more stable than any alternative solutions we found.”
“Modal is the easiest way to experiment as we develop new fine-tuning techniques. We’ve been able to validate new features faster and beat competitors because of how quickly we can try new ideas.”
Define and share reproducible environments
Define your requirements in code so your whole team runs the same environment, with no local setup required.
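For instance, an environment for fine-tuning can be declared in a few lines of Python; this is a minimal sketch, and the package list and app name are illustrative, not a prescribed stack:

```python
import modal

# The image is defined in code, so every teammate and every cloud run builds
# the exact same container, with no local environment setup required.
image = (
    modal.Image.debian_slim(python_version="3.11")
    .pip_install("torch", "transformers", "datasets")  # illustrative package list
)

app = modal.App("finetune-example", image=image)
```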
Fast hyperparameter sweeps
Scale up to hundreds of multi-GPU fine-tuning runs in just a few seconds with a single function call.
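A minimal sketch of such a sweep, assuming a hypothetical `finetune` function with placeholder training logic:

```python
import modal

app = modal.App("hyperparam-sweep")

# "A100:2" requests two A100s per container; adjust the GPU spec to your workload.
@app.function(gpu="A100:2", timeout=60 * 60)
def finetune(lr: float, batch_size: int) -> float:
    # Placeholder for a real training loop; return the run's final eval loss.
    eval_loss = 0.0
    return eval_loss

@app.local_entrypoint()
def sweep():
    configs = [(lr, bs) for lr in (1e-5, 3e-5, 1e-4) for bs in (16, 32)]
    # starmap fans the configs out across parallel GPU containers.
    losses = list(finetune.starmap(configs))
    print("best eval loss:", min(losses))
```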
Flexible framework integration
Use your favorite ML fine-tuning frameworks, like Hugging Face, PyTorch, and Axolotl, or write your own training loop.
Integration with popular tools
Monitor experiment results with Weights & Biases and visualize training progress using TensorBoard.
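As a rough sketch, Weights & Biases logging inside a Modal function might look like the example below; it assumes you have created a Modal Secret named `wandb-secret` containing your `WANDB_API_KEY`:

```python
import modal

app = modal.App("wandb-logging")
image = modal.Image.debian_slim().pip_install("wandb")

# The secret name "wandb-secret" is an assumption -- create it with your own API key.
@app.function(image=image, secrets=[modal.Secret.from_name("wandb-secret")])
def train(lr: float = 3e-5):
    import wandb

    run = wandb.init(project="modal-finetune", config={"lr": lr})
    for step in range(100):
        run.log({"loss": 1.0 / (step + 1)})  # stand-in for a real training loss
    run.finish()
```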
Real-time resource metrics
Track GPU, CPU, and memory usage for your running containers from the Modal dashboard.
Efficient data management
Store private datasets and model weights in Modal Volumes as easily as writing to local disk.
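A sketch of writing to a Volume, assuming a volume named `finetune-data` and a hypothetical checkpoint helper:

```python
import modal
import os

app = modal.App("volume-example")

# Persistent storage for datasets and checkpoints; created on first use.
vol = modal.Volume.from_name("finetune-data", create_if_missing=True)

@app.function(volumes={"/data": vol})
def save_checkpoint(step: int, weights: bytes):
    os.makedirs("/data/checkpoints", exist_ok=True)
    # Ordinary file I/O against the mounted volume path.
    with open(f"/data/checkpoints/step_{step}.bin", "wb") as f:
        f.write(weights)
    vol.commit()  # persist the new files so other functions can see them
```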
End-to-end model lifecycle
Deploy your functions for data processing, fine-tuning, and serving, all on Modal.
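One way to organize this, sketched here with hypothetical stage functions, is a single Modal App whose stages are published together with `modal deploy`:

```python
import modal

app = modal.App("finetune-pipeline")

@app.function()
def preprocess(raw_path: str) -> str:
    # Clean and tokenize the raw dataset, returning the processed path.
    return raw_path

@app.function(gpu="A100")
def finetune(dataset_path: str) -> str:
    # Run the training loop and return where the weights were written.
    return "/data/weights"

@app.function(gpu="A100")
def generate(prompt: str) -> str:
    # Load the fine-tuned weights and return a completion.
    return prompt

# Publish every stage in one step:  modal deploy finetune_pipeline.py
```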
Use Cases