AI infrastructure that developers love
Sub-second container starts
We built a Rust-based container stack from scratch so you can iterate as quickly in the cloud as you can locally.
View Docs
Zero config files
Easily define hardware and container requirements next to your Python functions.
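For instance, the GPU type, CPU, memory, and container image can all be declared on the function itself. A minimal sketch using Modal's decorator API (the model, package, and resource values below are illustrative, not defaults):

```python
import modal

app = modal.App("embeddings-demo")

# The container image is defined in Python, next to the code that runs in it.
image = modal.Image.debian_slim().pip_install("sentence-transformers")

# Hardware and container requirements live on the function they apply to.
@app.function(gpu="A10G", image=image, cpu=2, memory=8192)
def embed(sentences: list[str]) -> list[list[float]]:
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("all-MiniLM-L6-v2")
    return model.encode(sentences).tolist()
```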
View Docs
Scale to hundreds of GPUs in seconds
Never worry about hitting rate limits again. We autoscale containers for your functions instantly.
View Docs
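Concretely, fanning a function out over many inputs is a single call, and Modal provisions the containers behind it. A sketch of the pattern (the transcription stub and URL list are placeholders; `.map()` and `@app.local_entrypoint()` are Modal's fan-out and entrypoint hooks):

```python
import modal

app = modal.App("fanout-demo")

@app.function(gpu="T4")
def transcribe(url: str) -> str:
    # Placeholder body; a real version would fetch and transcribe the audio.
    return f"transcript for {url}"

@app.local_entrypoint()
def main():
    urls = [f"https://example.com/episode-{i}.mp3" for i in range(200)]
    # .map() spreads the 200 calls across containers that Modal autoscales.
    for transcript in transcribe.map(urls):
        print(transcript[:40])
```

Run with `modal run`, this spins up as many containers as the inputs need and scales back down when the work is done.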
Only pay while your code is running
Compute costs
GPU Tasks
B200: $0.001736 / sec
H200: $0.001261 / sec
H100: $0.001097 / sec
A100-80GB: $0.000694 / sec
A100-40GB: $0.000583 / sec
L40S: $0.000542 / sec
A10G: $0.000306 / sec
L4: $0.000222 / sec
T4: $0.000164 / sec

CPU
Physical core (2 vCPU equivalent): $0.0000131 / core / sec
*minimum of 0.125 cores per container

Memory
$0.00000222 / GiB / sec
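As a rough illustration of per-second billing: one hour of continuous H100 time at $0.001097 / sec comes to about $3.95, and an hour of A10G time at $0.000306 / sec to about $1.10, with nothing billed while no container is running.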
Security and governance
Built with Modal
“Modal Sandboxes enable us to execute generated code securely and flexibly. We expedited the development of our code interpreter feature integrated into Le Chat.”
Wendy Shang, AI Scientist
“Modal makes it easy to write code that runs on 100s of GPUs in parallel, transcribing podcasts in a fraction of the time.”
Mike Cohen, Head of Data
“Tasks that would have taken days to complete take minutes instead. We’ve saved thousands of dollars deploying LLMs on Modal.”
Rahul Sengottuvelu, Head of Applied AI
“The beauty of Modal is that all you need to know is that you can scale your function calls in the cloud with a few lines of Python.”
Georg Kucsko, Co-founder and CTO
Community
If you building AI stuff with Python and haven't tried @modal_labs you are missing out big time
@modal_labs continues to be magical... 10 minutes of effort and the `joblib`-based parallelism I use to test on my local machine can trivially scale out on the cloud. Makes life so easy!
This tool is awesome. So empowering to have your infra needs met with just a couple decorators. Good people, too!
Recently built an app on Lambda and just started to use @modal_labs, the difference is insane! Modal is amazing, virtually no cold start time, onboarding experience is great 🚀
Probably one of the best piece of software I'm using this year: modal.com
feels weird at this point to use anything else than @modal_labs for this — absolutely the GOAT of dynamic sandboxes
Nothing beats @modal_labs when it comes to deploying a quick POC
Late to the party, but finally playing with @modal_labs to run some backend jobs. DX is sooo nice (compared to Docker, Cloud Run, Lambda, etc). Just decorate a Python function and deploy. And it's fast! Love it.
Bullish on @modal_labs - Great Docs + Examples - Healthy Free Plan (30$ free compute / month) - Never have to worry about infra / just Python
@modal_labs has got a bunch of stuff just worked out this should be how you deploy python apps. wow
If you are still using AWS Lambda instead of @modal_labs you're not moving fast enough
special shout out to @modal_labs and @_hex_tech for providing the crucial infrastructure to run this! Modal is the coolest tool I’ve tried in a really long time— cannnot say enough good things.
I use @modal_labs because it brings me joy. There isn't much more to it.
I have tried @modal_labs and am now officially Modal-pilled. Great work @bernhardsson and team. Every hyperscalar should be trying this out and immediately pivoting their compute teams' roadmaps to match this DX.
I've realized @modal_labs is actually a great fit for ML training pipelines. If you're running model-based evals, why not just call a serverless Modal function and have it evaluate your model on a separate worker GPU? This makes evaluation during training really easy.