Modal Code Playground

Welcome to the Modal code playground!

This is an interactive introduction to Modal that walks you through easy-to-understand example programs you can run and modify from your browser.

When you click Run, Modal takes your code, puts it in a container, and executes it in the cloud. Learn more about how the playground works here.

You can also consult our guides to learn more about Modal’s features, and our examples for starting points tailored to your specific use case. To run your code locally, you’ll need to install the Python package (pip install modal) and create an API token (python3 -m modal setup). If at any point you get stuck or have a question, please ask us in the Modal Community Slack. We’re here to help!


Get started with GPU acceleration

In this tutorial, we walk you through running a GPU-accelerated function to show how Modal makes running code remotely as easy, cost-effective, and performant as possible.

Run a Modal function

Let’s say we have a simple Python function check_gpus that lists the system’s GPUs by running nvidia-smi.
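
For reference, such a function could look like the minimal sketch below (the exact body in the playground may differ slightly); it simply shells out to nvidia-smi:

import subprocess

def check_gpus():
    # List the GPUs visible to the system (assumes nvidia-smi is on the PATH)
    subprocess.run(["nvidia-smi"], check=True)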

There are just three things we do to run this function on Modal:

1. Create a modal.App

An App is a group of functions and classes that are run together in Modal.
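
Creating the App is a single line; the name below is just an illustrative choice:

import modal

app = modal.App("get-started-gpu")  # any descriptive name works here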

2. Wrap a function with @app.function

Functions are the basic unit of serverless execution on Modal.

To make any Python function work with Modal, we just wrap it with the @app.function decorator, which registers it with our App. We can attach a GPU to the function by passing the gpu argument to the decorator.
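
For example, requesting a single T4 (used here only as an illustration; other GPU types work the same way) looks like this:

@app.function(gpu="T4")
def check_gpus():
    # Runs inside a GPU-enabled container on Modal when called with .remote()
    subprocess.run(["nvidia-smi"], check=True)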

3. Define a CLI entrypoint with @app.local_entrypoint

Finally, we define a local entrypoint for our App. Inside the entrypoint function, we call our Modal Function check_gpus in two ways (put together in the sketch after this list):

  • .local, which executes the function in the same environment as the calling function
  • .remote, which runs the function remotely on Modal
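
A sketch of such an entrypoint is below; note that the .local() call only produces GPU output if nvidia-smi is available on the machine running the entrypoint:

@app.local_entrypoint()
def main():
    print("Running check_gpus locally:")
    check_gpus.local()   # runs in the calling environment

    print("Running check_gpus remotely:")
    check_gpus.remote()  # runs in a GPU-enabled container on Modal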

We’ve done these steps for you, so all you need to do is hit the Run button.

Once your program is running, you should see the App initialize in the logs. You should then see the output of main as it executes, followed by the output of check_gpus as it runs locally and then remotely on Modal in a GPU-enabled container. This is an ephemeral App, which is automatically terminated when the program finishes.

🎉 Congratulations, you’ve just run your first function on Modal!

Choose your hardware

Modal offers a wide range of GPU types to choose from, including H100s, A100s, A10Gs, and more, with varying constraints and pricing.

Try running your function in a container with two T4s:

@app.function(gpu="T4:2")  
def check_gpus():
    ...

You should see nvidia-smi report two GPUs, each with a different UUID.

Run a more interesting function

Aside from the Modal client itself, all the code in this tutorial uses only the Python standard library.

To run more useful functions, you’ll probably need external packages and libraries. In our next tutorial, we’ll walk you through customizing your container environment to use popular ML frameworks like transformers and torch, so that you’ll be able to run any GPU-accelerated workload on Modal.
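
As a quick preview (the details are covered in the next tutorial), a custom environment is described with a modal.Image; the package list and function below are purely illustrative:

image = modal.Image.debian_slim().pip_install("torch", "transformers")

@app.function(gpu="T4", image=image)
def run_model():
    import torch  # imported inside the function, where the image provides it
    print(torch.cuda.is_available())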


$ modal run get_started.py