Cloud functions reimagined

Run generative AI models, large-scale batch jobs, job queues, and much more.

Bring your own code — we run the infrastructure.

pip install modal
python3 -m modal setup
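After setup, a function can run in the cloud with a couple of decorators. A minimal sketch, assuming the `modal` SDK's documented `App`/`function` API (the app name and function here are illustrative):

```python
import modal

app = modal.App("hello-modal")  # illustrative app name

@app.function()
def square(x: int) -> int:
    return x * x

@app.local_entrypoint()
def main():
    # .remote() executes the function in a Modal container, not locally
    print(square.remote(7))
```

Saved as `hello.py`, this would run in the cloud with `modal run hello.py`.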

Customers run Modal to power data-intensive applications

Engineered for large-scale workloads

We built a container system from scratch in Rust for the fastest cold-start times. Scale to hundreds of GPUs and back down to zero in seconds, and pay only for what you use.

GPU Containers

Enqueued      Startup   Execution   Status
10:44:15 AM   0.0s      -           Pending
10:44:11 AM   0.0s      2.0s        Succeeded
10:44:10 AM   0.0s      2.0s        Succeeded
10:44:09 AM   1.0s      2.3s        Succeeded

Iterate at the speed of thought

Deploy functions to the cloud in seconds, with custom container images and hardware requirements. Never write a single line of YAML.


Everything your app needs

Environments
Express container images and hardware specifications entirely in code.
Say goodbye to Dockerfiles and YAML.
Storage
Provision network file systems, key-value stores and queues with ease.
Use powerful cloud primitives that feel like regular Python.
Job scheduling
Turn functions into cron jobs with a single line of code.
Spawn compute intensive jobs without blocking your backend.
Web endpoints
Serve any function as an HTTPS endpoint.
Ship to your own custom domains.
Observability
Monitor executions, logs and metrics in real time.
Debug interactively with modal shell.
Security
Secure your workloads with our battle-tested gVisor runtime.
Industry-standard SOC 2 compliance.
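Several of the features above are expressed directly in Python. A minimal sketch, assuming the `modal` SDK's documented decorators (the image contents, GPU type, cron schedule, and endpoint are illustrative, and the web-endpoint decorator name may differ across SDK versions):

```python
import modal

app = modal.App("example-app")  # illustrative app name

# Environments: the container image is declared in code, no Dockerfile or YAML.
image = modal.Image.debian_slim().pip_install("torch")

@app.function(image=image, gpu="A100")
def embed(texts: list[str]) -> list[list[float]]:
    ...  # runs on an A100 in the cloud

# Job scheduling: a cron job in a single line.
@app.function(schedule=modal.Cron("0 8 * * *"))
def nightly_job():
    ...

# Web endpoints: any function served over HTTPS once deployed.
@app.function()
@modal.web_endpoint()
def ping():
    return {"ok": True}
```

Deploying with `modal deploy` would provision all of this without any configuration files.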

Ramp uses Modal to run some of our most data-intensive projects. Our team loves the developer experience because it allows them to be more productive and move faster. Without Modal, these projects would have been impossible for us to launch.
Karim Atiyeh, CTO, Ramp
Substack recently launched a feature for AI-powered audio transcriptions. The data team picked Modal because it makes it easy to write code that runs on hundreds of GPUs in parallel, transcribing podcasts in a fraction of the time.
Mike Cohen, Head of Data, Substack

Only pay for what you use

Scale up to hundreds of nodes and down to zero within seconds. Pay for actual compute, by the CPU cycle.

See pricing

Compute Costs (per second)

CPU                     $0.0000533 / core / sec

GPU
  Nvidia A100, 40 GB    $0.001036 / sec
  Nvidia A100, 80 GB    $0.001553 / sec
  Nvidia A10G           $0.000306 / sec
  Nvidia L4             $0.000291 / sec
  Nvidia T4             $0.000164 / sec

Memory                  $0.00000667 / GiB / sec
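As a concrete illustration of per-second billing, the rates above can be combined to estimate a job's cost. The rates are copied from the table; the workload (container count, cores, memory, duration) is hypothetical:

```python
# Per-second rates from the pricing table above.
CPU_PER_CORE_SEC = 0.0000533   # $ / core / sec
A10G_PER_SEC = 0.000306        # $ / sec
MEM_PER_GIB_SEC = 0.00000667   # $ / GiB / sec

def job_cost(cores: float, gib: float, gpu_rate: float, seconds: float) -> float:
    """Cost of one container running for `seconds` with the given resources."""
    per_sec = cores * CPU_PER_CORE_SEC + gib * MEM_PER_GIB_SEC + gpu_rate
    return per_sec * seconds

# Hypothetical batch job: 10 containers, each with 4 cores, 16 GiB of memory
# and one A10G GPU, running for 120 seconds.
total = 10 * job_cost(cores=4, gib=16, gpu_rate=A10G_PER_SEC, seconds=120)
print(f"${total:.4f}")  # → $0.7511
```

Because billing stops when containers scale to zero, the same job run once a day costs the same as ten runs of a tenth the size.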


Join Modal's developer community

special shout out to @modal_labs and @_hex_tech for providing the crucial infrastructure to run this! Modal is the coolest tool I've tried in a really long time, cannot say enough good things.

The Tech Stack you need to build powerful apps. Frontend: @nextjs Backend: @supabase Deploy: @vercel Data Processing: @modal_labs The beauty of this stack is that you can start for FREE

Shoutout to @modal_labs, which I used to run the @OpenAI Whisper models to transcribe the audio. Appreciate the previous commenters who recommended it! It was easy to parallelize around ~80 containers so a 90+ min podcast could be transcribed in under a minute🤯🤯🤯 [3/4]

@modal_labs (modal.com): the easiest way to run stuff in the cloud. Honestly it's mind-blowing. Thanks @bernhardsson!

@modal_labs is a blessing built by the Cloud Computing deity to bring joy and love for our lives. I've never seen anything like it, but it is the best PaaS/SaaS/Whatever-a-a-S I've ever used.

Something like this would have been basically impossible to do so quickly without @modal_labs, since I'd have to learn ML infra on gcp/aws, auto scaling, managing gpu infra, etc. It's autoscaled by them, so I can easily tune 5, 10, 15 models at a time. Don't have to manage A100s

Modal has already completely changed the way I interact with the cloud. It's so fast that I skip the local dev environment and just develop my code in the cloud from the start of a project. 1/

If you are still using AWS Lambda instead of @modal_labs you're not moving fast enough

@modal_labs wow... this *just works*! ~10 mins all said and done to deploy my model

Yeah @modal_labs is awesome. Fastest and easiest way to deploy and schedule any python code. Documentation is excellent, slack very responsive and they have excellent examples!

Been using @modal_labs lately and it really is as good as people say. For the small project I'm building, it's the first tool that finds the right abstraction level for remotely executing code.

Ship your first app in minutes

with 100+ hours of free compute