Modal for Academics
Modal is the fastest way for researchers to develop tomorrow's cutting-edge AI models and machine learning methods. Instantly deploy experiments on the most powerful GPUs with just a few lines of code. Iterate faster with Modal.
Why choose Modal for your research?
Program Credits
Unlock up to $10k in Modal credits to supercharge your project
Instant GPU Access
Run workloads on B200s, H100s, and more without requesting quota
Pay As You Compute
Modal provisions and scales GPUs as needed, and you only pay for what you use
Iterate Faster
Modal's developer-first Python API, sub-second container start times, and Notebooks tool let you iterate faster than ever
Stay Unblocked
Run massive experiments when you need to on state-of-the-art infrastructure
Research Support
Direct access to the Modal team of engineers and PhDs for research support
How Credits Work
- Credits expire after conference submission results are finalized
- Credits are automatically applied toward compute usage, not subscription fees
- Credits are granted once per conference
Educators interested in using Modal to teach a course should reach out to partnerships@modal.com
“Verifying an LLM-based approach that relies on test-time compute can be challenging to scale. Through collaboration with ARC Prize, the MIT + Cornell team partnered with Modal to provide both credits and infrastructure to make this possible. Huge thanks to Modal for working with us to spin up an environment that efficiently ran our model for verification.”
Kevin Ellis & Zenna Tavares, Researchers
“Check out Tokasaurus on Modal to make Llama-1B brrr! This repeated sampling example shows off two engine features that are important for serving small models: very low CPU overhead and automatic shared prefix exploitation with Hydragen.”
Jordan Juravsky, Researcher
“We wouldn't have been able to publish Four Over Six on such a tight deadline if it weren't for Modal! We needed to run hundreds of experiments on B200s to fill out our evaluation tables, and Modal made it super easy to run them all in parallel.”
Jack Cook, Researcher