Running CUDA in Google Colab

Before reading the lines below, immediately change the runtime type to use the GPU as hardware accelerator (Runtime → Change runtime type → GPU). This step is easily forgotten.

Now, we can begin :-)

CUDA comes pre-installed in Google Colab. Some information on Google Colab is reported in the post “PyCUDA, Google Colab and the GPU”.

To check this, simply open a new notebook and type
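The original cell is not preserved in this copy; the standard way to query the toolkit version is the nvcc version command:

```shell
# Print the version of the CUDA compiler (nvcc) shipped with Colab
!nvcc --version
```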

which will return the installed CUDA version, for example [as of July 3, 2022]
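The original output is not preserved here; with CUDA 11.1 installed it would look something like the following (copyright banner and build details elided):

```
nvcc: NVIDIA (R) Cuda compiler driver
...
Cuda compilation tools, release 11.1
```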

As can be seen, the installed version is CUDA 11.1.

To enable CUDA programming and execution directly under Google Colab, you can install the nvcc4jupyter plugin as
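The install cell was dropped from this copy; a typical install command, assuming the package name nvcc4jupyter (the plugin could also be installed directly from its GitHub repository), is:

```shell
# Install the nvcc4jupyter notebook plugin
!pip install nvcc4jupyter
```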

After that, you should load the plugin as
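The load cell is likewise missing; the extension name below is the one registered by recent versions of the plugin (older versions used a different name):

```shell
%load_ext nvcc4jupyter
```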

and write the CUDA code by prefixing the cell with the plugin's cell magic (%%cuda in recent versions). This is an example:
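The original example was lost in this copy; a minimal sketch of such a cell, assuming the %%cuda magic, could look like this:

```cuda
%%cuda
#include <cstdio>

// Trivial kernel: each thread prints its own index
__global__ void helloKernel() {
    printf("Hello from thread %d\n", threadIdx.x);
}

int main() {
    helloKernel<<<1, 4>>>();   // launch 1 block of 4 threads
    cudaDeviceSynchronize();   // wait for the kernel so its printf output appears
    return 0;
}
```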

Once this has been done, a source file containing the cell's code has been created in the /content/src directory.

The created file must then be compiled. To do so, we should check the architecture of the GPU that Google Colab makes available to us. The simplest way to accomplish this task is to install PyCUDA as
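The install cell is not preserved; the usual command is:

```shell
!pip install pycuda
```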

and using the following snippet
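The snippet itself is missing from this copy; a minimal PyCUDA device query, assuming a single GPU exposed as device 0, could look like this:

```python
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)                     # Colab exposes a single GPU as device 0
major, minor = dev.compute_capability()  # e.g. (7, 5) on a Tesla T4
print(dev.name())
print(f"Compute capability: {major}.{minor}")
```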

to inquire about the device properties. The GPU made available at the time of writing has compute capability 7.5.

Accordingly, compilation can be done as follows:
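The compile cell is not preserved; assuming the file generated by the plugin is named, say, kernel.cu (a hypothetical name — check /content/src for the actual one), compilation for compute capability 7.5 reads:

```shell
# -arch=sm_75 targets compute capability 7.5 (e.g., the Tesla T4)
!nvcc -arch=sm_75 /content/src/kernel.cu -o /content/kernel
```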

Finally, the code can be run as
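Assuming the executable was named kernel (hypothetical, matching whatever name was passed to nvcc's -o option):

```shell
!/content/kernel
```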

Please be advised that include files with the .cuh extension are not admitted. Only use include files with the .h extension.

A fully worked example is available at the following link. Notice that the same approach also works on notebooks other than Google Colab's, e.g., the Jupyter notebooks offered by Amazon Web Services.

Notice that Google Colab currently provides the newer T4 or P100 GPUs, or the older K80 if other GPUs are not available. CUDA 11 shows only partial support for the 3.x (Kepler) architecture.

Google Colab has multiple installations of CUDA. To check that, just type
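The cell is missing from this copy; presumably it moves into the installation root:

```shell
%cd /usr/local/
```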

and ensure that you are in the general CUDA installation directory by
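for instance by printing the working directory:

```shell
!pwd
```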

List all the files by
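The listing cell would be a plain directory listing:

```shell
!ls -l /usr/local/
```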

You should receive the following:
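The original listing is not preserved; given the versions mentioned below, it looked something like this (owners, sizes and dates elided):

```
lrwxrwxrwx ... cuda -> /usr/local/cuda-11.1
drwxr-xr-x ... cuda-10.0
drwxr-xr-x ... cuda-10.1
drwxr-xr-x ... cuda-11.0
drwxr-xr-x ... cuda-11.1
```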

As you can see, CUDA 10.0, 10.1, 11.0 and 11.1 are installed.

If you need to change the active CUDA version (due, for example, to compatibility issues of newer CUDAs with older — e.g., K80 — cards), just delete the soft link to the current version and re-establish it to the desired CUDA release, for example, CUDA 10.1:
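The commands are missing from this copy; the standard way to re-point the /usr/local/cuda symlink is:

```shell
# Remove the current symlink and point it at CUDA 10.1
!rm /usr/local/cuda
!ln -s /usr/local/cuda-10.1 /usr/local/cuda
```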

To check that the soft link has been correctly established, just type:
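A long listing of the symlink shows its target:

```shell
!ls -l /usr/local/cuda
```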

You should receive something like:
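The original output is not preserved; after re-linking to CUDA 10.1 it would look something like this (owner, size and date elided):

```
lrwxrwxrwx ... /usr/local/cuda -> /usr/local/cuda-10.1
```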

If you need to install a different CUDA version, for example, CUDA 9.2, the recipe is the following:
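The original commands are not preserved in this copy; the general approach, sketched below, is to add NVIDIA's repository for the desired release and install the toolkit package (the repository installer must be downloaded from NVIDIA's CUDA Toolkit Archive first, and the package name below is an assumption):

```shell
# Sketch only: first add NVIDIA's CUDA repo for the desired release,
# following the instructions in the CUDA Toolkit Archive, then:
!apt-get update
!apt-get install -y cuda-toolkit-9-2   # package name is an assumption
```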

You will now find a cuda-9.2 directory under /usr/local/.


