Running CUDA in Google Colab

Before reading the lines below, change the runtime to use a GPU hardware accelerator (Runtime → Change runtime type → GPU). This step is easily forgotten.

Now, we can begin :-)

CUDA comes pre-installed in Google Colab. Some information on Google Colab is reported in the post “PyCUDA, Google Colab and the GPU”.

To check this, simply open a new notebook and type

!nvcc --version

which will return the installed CUDA version, for example [as of March 3, 2021]:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0

As can be seen, the installed version is CUDA 11.0.
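If a later step needs the version programmatically, the release number can be extracted from the nvcc banner with standard shell tools. A minimal sketch, run here on the sample banner line above (the sed pattern is our own, not part of any CUDA tooling):

```shell
# Sample line from the nvcc banner shown above
ver_line='Cuda compilation tools, release 11.0, V11.0.221'

# Pull out the release number between "release " and the following comma
release=$(printf '%s\n' "$ver_line" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')

printf '%s\n' "$release"   # prints 11.0
```

In a notebook, the same pipeline can be applied directly to the output of `!nvcc --version`.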

To enable CUDA programming and execution directly under Google Colab, you can install the nvcc4jupyter plugin as

!pip install git+https://github.com/andreinechaev/nvcc4jupyter.git

After that, you should load the plugin as

%load_ext nvcc_plugin

and write and run CUDA code by adding the

%%cu

prefix to the cell. Compilation and execution occur when pressing the play button to run the code cell.

This is an example:

%%cu
#include <stdio.h>

#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess) {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}

__global__ void add(int a, int b, int *c) { *c = a + b; }

int main() {
    // --- Host declarations and initializations
    int a, b, c;
    a = 2;
    b = 6;

    // --- Device allocations
    int *d_c;
    gpuErrchk(cudaMalloc(&d_c, sizeof(int)));

    // --- Kernel execution
    add<<<1, 1>>>(a, b, d_c);
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());

    // --- Moving the result from device to host
    gpuErrchk(cudaMemcpy(&c, d_c, sizeof(int), cudaMemcpyDeviceToHost));

    // --- Result printout
    printf("%d + %d is %d\n", a, b, c);

    gpuErrchk(cudaFree(d_c));

    return 0;
}

Notice that Google Colab currently provides the newer T4 or P100 GPUs, or the older K80 when the others are not available. CUDA 11 offers only partial (deprecated) support for the Kepler 3.x architecture of the K80.
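To see which architecture the assigned card actually supports, the compute capability can be queried from a %%cu cell. A minimal sketch using the standard cudaGetDeviceProperties runtime call (the printed capabilities in the comment are the well-known values for those cards):

```cuda
%%cu
#include <stdio.h>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Query the properties of device 0, the GPU Colab assigns to the session
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // A K80 reports 3.7, a P100 6.0, a T4 7.5
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    return 0;
}
```

If the reported capability is 3.x, switching to an older CUDA toolkit, as described below for the soft-link mechanism, may be necessary.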

Google Colab has multiple CUDA installations. To check that, just type

%cd /usr/local/

and check that you are in the general CUDA installation directory by

!pwd

List all the files by

!ls

The listing shows that CUDA 10.0, 10.1 and 11.0 are installed, alongside a cuda soft link pointing to the active version.

If you need to change the active CUDA version (due, for example, to compatibility issues with a K80 card), just delete the soft link and re-establish it to the desired CUDA version, for example, CUDA 10.1:

!rm -rf cuda
!ln -s /usr/local/cuda-10.1 /usr/local/cuda

To check that the soft link has been correctly established, just type

!stat cuda

The output should report cuda as a symbolic link pointing to /usr/local/cuda-10.1.
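The soft-link mechanics can be tried safely outside /usr/local first. A hypothetical sketch in a scratch directory (all names here are made up for illustration):

```shell
# Create a fake version directory to stand in for /usr/local/cuda-10.1
mkdir -p /tmp/cuda-demo/cuda-10.1
cd /tmp/cuda-demo

# Remove any existing link and re-establish it toward the desired version
rm -rf cuda
ln -s cuda-10.1 cuda

# Verify where the soft link points
readlink cuda   # prints cuda-10.1
```

The same rm/ln pair, run against the real directories under /usr/local, is exactly what the cells above do.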

If you need to install a different CUDA version, for example, CUDA 9.2, the recipe is the following:

!wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub
!apt-get update
!apt-get install cuda-9.2

You will now find a cuda-9.2 directory under /usr/local/.
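If the notebook should compile against the new toolkit without re-pointing the /usr/local/cuda link, the usual alternative is to export the standard CUDA environment variables. A sketch, assuming the install landed in /usr/local/cuda-9.2 as above:

```shell
# Point the toolchain at the freshly installed version
export CUDA_HOME=/usr/local/cuda-9.2
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

echo "$CUDA_HOME"   # prints /usr/local/cuda-9.2
```

With these set, the nvcc found on the PATH (and the runtime libraries found by the loader) come from the chosen version rather than from the soft link.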


We have been teaching, researching and consulting on parallel programming on Graphics Processing Units (GPUs) since the delivery of CUDA. We also play with Matlab and Python.
