High-performance AI infrastructure

Serverless cloud for AI, ML, and data applications – built for developers
Get Started

Cloud development made frictionless

Run generative AI models, large-scale batch jobs, job queues, and much more. Bring your own code — we run the infrastructure.

View Docs
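To make "bring your own code" concrete, here is a minimal sketch using Modal's Python SDK (the app name and function are placeholders, not a prescribed pattern):

```python
import modal

app = modal.App("example-app")  # hypothetical app name

@app.function()  # add e.g. gpu="H100" to attach a GPU to this function
def square(x: int) -> int:
    # Ordinary Python; Modal runs it in a container in the cloud.
    return x * x

@app.local_entrypoint()
def main():
    # `modal run example.py` runs main() locally and square() remotely.
    print(square.remote(7))
```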

Iterate at the speed of thought

Make code changes and watch your app rebuild instantly. Never write a single line of YAML again.

View Docs
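For example, a live-reloading web endpoint might look roughly like this (a sketch; assumes the `modal serve` workflow and Modal's web endpoint decorator, which newer SDK versions may expose as `fastapi_endpoint`):

```python
import modal

app = modal.App("hot-reload-demo")  # hypothetical app name
image = modal.Image.debian_slim().pip_install("fastapi[standard]")

@app.function(image=image)
@modal.web_endpoint(method="GET")  # serves the function at an HTTPS URL
def hello(name: str = "world"):
    return {"message": f"hello, {name}"}

# `modal serve hello.py` keeps a dev URL up and redeploys on every file save;
# the image and endpoint are declared in Python above, not in YAML.
```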

Built for large-scale workloads

Engineered in Rust, our custom container stack lets you scale to hundreds of GPUs and back down to zero in seconds. Pay only while your code is running.

View Docs
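As a rough illustration of that fan-out, here is a sketch using `Function.map` (the workload and input URLs are invented for the example):

```python
import modal

app = modal.App("fanout-demo")  # hypothetical app name

@app.function(gpu="L4")
def transcribe(url: str) -> str:
    # Placeholder for real GPU work, e.g. running a speech model on one file.
    return f"transcript of {url}"

@app.local_entrypoint()
def main():
    urls = [f"https://example.com/episode-{i}.mp3" for i in range(500)]
    # .map() fans the calls out across many containers; the pool scales up
    # for the burst and back down to zero when the run finishes.
    for transcript in transcribe.map(urls):
        print(transcript[:40])
```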

Use Cases

Generative AI inference that scales with you




View Examples
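One hedged sketch of what serving a model can look like, using Modal's class-based lifecycle hooks (`@app.cls`, `@modal.enter`, `@modal.method`); the model and libraries are illustrative choices, not requirements:

```python
import modal

app = modal.App("inference-demo")  # hypothetical app name
image = modal.Image.debian_slim().pip_install("transformers", "torch")

@app.cls(gpu="A100", image=image)
class Generator:
    @modal.enter()
    def load(self):
        # Runs once per container start, so weights load on cold start only,
        # not on every request.
        from transformers import pipeline
        self.pipe = pipeline("text-generation", model="gpt2")  # illustrative model

    @modal.method()
    def generate(self, prompt: str) -> str:
        return self.pipe(prompt, max_new_tokens=64)[0]["generated_text"]

@app.local_entrypoint()
def main():
    print(Generator().generate.remote("Serverless GPUs are"))
```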

Fine-tuning and training without managing infrastructure




View Examples
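A sketch of how a training job can be expressed, assuming a Modal Volume for checkpoints; the dataset URI, volume name, and training loop are placeholders:

```python
import modal

app = modal.App("finetune-demo")  # hypothetical app name
image = modal.Image.debian_slim().pip_install("torch", "transformers", "datasets")
weights = modal.Volume.from_name("finetune-weights", create_if_missing=True)

@app.function(gpu="A100", image=image, volumes={"/weights": weights}, timeout=60 * 60)
def finetune(dataset_uri: str):
    # Your usual training loop goes here; the GPU, image, and storage are
    # declared in the decorator instead of in provisioning scripts.
    with open("/weights/checkpoint.txt", "w") as f:
        f.write(f"trained on {dataset_uri}")
    weights.commit()  # persist the checkpoint beyond the container's lifetime

@app.local_entrypoint()
def main():
    finetune.remote("s3://my-bucket/training-data")  # placeholder dataset URI
```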

Batch processing optimized for high-volume workloads




View Examples
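For a queue-style batch job, one hedged sketch uses `Function.spawn` to enqueue work and collect results later (the record shape and per-record processing are placeholders; `.map()` shown earlier is the other common pattern):

```python
import modal

app = modal.App("batch-demo")  # hypothetical app name

@app.function(cpu=2, memory=1024)
def process(record: dict) -> dict:
    # Placeholder for real per-record work (parsing, OCR, feature extraction, ...).
    return {"id": record["id"], "ok": True}

@app.local_entrypoint()
def main():
    records = [{"id": i} for i in range(10_000)]
    # spawn() enqueues each job and returns immediately; workers autoscale
    # to drain the queue, and you only pay while they run.
    calls = [process.spawn(r) for r in records]
    results = [c.get() for c in calls]
    print(sum(r["ok"] for r in results), "records processed")
```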

Features







Only pay when your code is running
Scale up to hundreds of nodes and down to zero within seconds. Pay only for the compute you actually use, down to the CPU cycle, with $30 of compute on us every month.

Compute costs


GPU Tasks

Nvidia H100: $0.001267 / sec
Nvidia A100, 80 GB: $0.000944 / sec
Nvidia A100, 40 GB: $0.000772 / sec
Nvidia A10G: $0.000306 / sec
Nvidia L4: $0.000222 / sec
Nvidia T4: $0.000164 / sec

CPU

Physical core (2 vCPU): $0.000038 / core / sec
*minimum of 0.125 cores per container

Memory

$0.00000667 / GiB / sec
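To get a feel for what those per-second rates add up to, here is a back-of-the-envelope calculation; the job shape (one hour on an A100 80 GB with 2 physical cores and 32 GiB of memory) is invented for illustration:

```python
# Hypothetical job: 1 hour on an A100 80 GB, 2 physical cores, 32 GiB RAM.
seconds = 60 * 60

gpu = 0.000944 * seconds         # A100, 80 GB       -> ~$3.40
cpu = 0.000038 * 2 * seconds     # 2 physical cores  -> ~$0.27
mem = 0.00000667 * 32 * seconds  # 32 GiB of memory  -> ~$0.77

print(f"${gpu + cpu + mem:.2f}")  # roughly $4.44 for the hour
```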

For teams of all sizes
Starter
For small teams and independent developers looking to level up.
Team
For startups and larger organizations looking to scale quickly.
Enterprise
For organizations prioritizing security, support, and reliability.

Security and governance





Learn More

Built with Modal

“Modal makes it easy to write code that runs on 100s of GPUs in parallel, transcribing podcasts in a fraction of the time.”

Mike Cohen, Head of Data

“Tasks that would have taken days to complete take minutes instead. We’ve saved thousands of dollars deploying LLMs on Modal.”

Rahul Sengottuvelu, Head of Applied AI

“The beauty of Modal is that all you need to know is that you can scale your function calls in the cloud with a few lines of Python.”

Georg Kucsko, Co-founder and CTO

Case Study
Join Modal's developer community
Modal Community Slack

If you're building AI stuff with Python and haven't tried @modal_labs you are missing out big time

@modal_labs continues to be magical... 10 minutes of effort and the `joblib`-based parallelism I use to test on my local machine can trivially scale out on the cloud. Makes life so easy!

This tool is awesome. So empowering to have your infra needs met with just a couple decorators. Good people, too!

Modal has the most magical onboarding I've ever seen and it's not even close. And Erik's walk through of how they approached it is a Masterclass.

special shout out to @modal_labs and @_hex_tech for providing the crucial infrastructure to run this! Modal is the coolest tool I’ve tried in a really long time— cannot say enough good things.

I use @modal_labs because it brings me joy. There isn't much more to it.

I have tried @modal_labs and am now officially Modal-pilled. Great work @bernhardsson and team. Every hyperscaler should be trying this out and immediately pivoting their compute teams' roadmaps to match this DX.

I've realized @modal_labs is actually a great fit for ML training pipelines. If you're running model-based evals, why not just call a serverless Modal function and have it evaluate your model on a separate worker GPU? This makes evaluation during training really easy.

Ship your first app in minutes.

Get Started

$30 / month free compute