GPU Glossary

What is the CUDA C++ programming language?

CUDA C++ is an implementation of the CUDA programming model as an extension of the C++ programming language.

CUDA C++ adds several features to C++ to implement the CUDA programming model, including:

  • Kernel definition with __global__. CUDA kernels are implemented as C++ functions annotated with this keyword; they take in pointers and have return type void.
  • Kernel launches with <<<>>>. Kernels are executed from the CPU host using a triple angle bracket syntax that sets the dimensions of the grid and of its thread blocks.
  • Shared memory allocation with the __shared__ keyword, barrier synchronization with the __syncthreads() intrinsic function, and thread block and thread indexing with the blockIdx and threadIdx built-in variables (block and grid dimensions are available via blockDim and gridDim).
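The features above can be sketched in one small program. This is a minimal example, not from the glossary itself: a vector-add kernel showing __global__, the <<<>>> launch syntax, and the built-in indexing variables, plus a second kernel showing __shared__ memory and __syncthreads(). The kernel names and sizes are illustrative choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel definition: a void function annotated with __global__,
// taking pointers as arguments.
__global__ void add(const float* a, const float* b, float* c, int n) {
    // Built-in variables combine into a unique global thread index.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Shared memory and barrier synchronization: each thread block
// sums its slice of the input with a tree reduction.
__global__ void blockSum(const float* in, float* out) {
    __shared__ float buf[256];          // visible to all threads in the block
    int t = threadIdx.x;
    buf[t] = in[blockIdx.x * blockDim.x + t];
    __syncthreads();                    // wait until every thread has loaded
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (t < s) buf[t] += buf[t + s];
        __syncthreads();                // barrier between reduction steps
    }
    if (t == 0) out[blockIdx.x] = buf[0];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Kernel launch: <<<grid, block>>> runs 4 blocks of 256 threads.
    add<<<4, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);        // each element is 1.0 + 2.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Saved as, say, add.cu, this would be compiled with something like `nvcc add.cu -o add`, which splits the source into host code (handed to the host compiler) and device code.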

CUDA C++ programs are compiled by a combination of a host C/C++ compiler driver, like gcc, and the NVIDIA CUDA Compiler Driver, nvcc.

For information on how to use CUDA C++ on Modal, see this guide.
