Photorealism is a technique aimed at obtaining artificial images indistinguishable from natural ones.
Photorealistic images require the full simulation of light propagation, reflection and refraction from a light source, through the surfaces involved in the illumination, until the light reaches the human eye. All the electromagnetic properties of a surface, like colour, reflectivity and opacity, must be properly taken into account in the optical model. Ray tracing is the ultimate step towards photorealism.
From a practical point of view, inverse ray tracing is used to assign a colour to each pixel. The inverse path of light is…
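As an illustration of the first step of inverse ray tracing, here is a minimal NumPy sketch that generates one primary ray per pixel. The pinhole-camera convention, function name and parameters are our own illustrative assumptions, not part of any specific renderer:

```python
import numpy as np

def primary_rays(width, height, fov_deg=60.0):
    """Generate one normalized ray direction per pixel for a pinhole
    camera at the origin looking down -z (illustrative convention)."""
    aspect = width / height
    half = np.tan(np.radians(fov_deg) / 2.0)
    # Map pixel centers onto the image plane in [-1, 1]
    xs = (2.0 * (np.arange(width) + 0.5) / width - 1.0) * half * aspect
    ys = (1.0 - 2.0 * (np.arange(height) + 0.5) / height) * half
    dirs = np.stack(np.broadcast_arrays(
        xs[np.newaxis, :], ys[:, np.newaxis],
        np.full((height, width), -1.0)), axis=-1)
    # Normalize so each direction is a unit vector
    return dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

rays = primary_rays(640, 480)
print(rays.shape)          # (480, 640, 3)
```

Each of these rays would then be traced backwards through the scene to find the surfaces contributing to that pixel's colour.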
The CUDA workflow followed by many programmers consists of distributing the code over various .cpp and .cu files, where the .cu files contain the __global__ functions, while the .cpp files contain the allocation of GPU memory spaces performed by cudaMalloc, the memory movements from host to device and vice versa performed by cudaMemcpy, and the __global__ function invocations executed by the <<<...>>> syntax. These files are compiled and then executed.
In some cases, it is convenient or necessary to compile the __global__ functions at run-time, using so-called Just-In-Time (JIT) compilation, instead of compiling them in advance at compile-time. The…
Download the latest version of the NVIDIA OptiX SDK. As of April 2021, the latest version is 7.2.0. Check the driver requirements: version 7.2.0 requires the NVIDIA R456.71 driver or newer for Windows. The download requires setting up an NVIDIA developer account.
Execute the NVIDIA-OptiX-SDK-7.2.0-win64.exe file. It will create the C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.2.0\SDK directory.
To configure the build, CMake is needed. As of April 2021, the latest CMake version is 3.20.0. From the binary distributions at https://cmake.org/download/, download the .msi file and install CMake.
After having installed CMake, run cmake-gui.
Create a build directory for OptiX, for example C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.2.0\SDK\build. …
Recently, in the story The evolution of a GPU: from gaming to computing, we discussed the historical evolution of CPUs and GPUs and underlined how GPUs can be significantly more powerful than commercial CPUs. We now ask ourselves why PCs are still based on CPUs and are not entirely made of GPUs.
The answer is that CPUs work in a totally different way than GPUs, and the figure below helps us understand the main differences.
The colour convention is that green represents the computational units, or cores, orange the memories and yellow the control units.
Computational units (cores)
In this post, we provide an introduction to the PyCUDA library and to the Google Colaboratory environment, together with a short PyCUDA sample that can also be run on Google Colab.
What is PyCUDA?
PyCUDA is a library developed by Andreas Klöckner et al. that allows writing CUDA codes and compiling, optimizing and using them as ordinary Python functions, in a way totally transparent to the user. The user does not need to manage the CUDA compiler unless he explicitly requests it.
PyCUDA uses the concept of GPU run-time code generation (RTCG), enabling the execution of low-level code launched by the high-level…
The above image shows the increase of computing power over time for CPUs and GPUs and highlights how the computational capabilities of GPUs grow faster than those of CPUs.
Today, the number of Floating Point OPerations (FLOP) per second (FLOPS) performed by a GPU can easily reach tens of TeraFLOPS, namely, tens of thousands of billions of FLOPS, something inconceivable 15 years ago. But how did it come to this?
CPUs and GPUs have evolved their computing capabilities along different lines.
Up to the 2000s, CPUs increased their computing power by increasing the clock speed. This…
Have you decided to use TensorFlow for Machine Learning applications, but you can’t make heads or tails of it? Do you need the basic principles and a few elementary examples to start tinkering? This post is for you!
Simple examples to break the ice with TensorFlow will be discussed. But, first of all, some basic principles.
What is TensorFlow?
TensorFlow is an open-source software library for Machine Learning developed by Google. Its source code was released in 2015 under an Apache 2.0 open source licence. …
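As a first ice-breaking example, here is our own minimal sketch in the eager execution style of TensorFlow 2.x: two constant tensors combined by a matrix product:

```python
import tensorflow as tf

# Two constant tensors and an operation combining them
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)                    # runs eagerly in TensorFlow 2.x

print(c.numpy())
```

In eager mode the result is available immediately as a NumPy array, with no explicit session or graph construction.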
Among the most common technical and scientific numerical operations, matrix multiplication occupies one of the top positions.
In Python, matrix multiplication is immediately possible using the dot routine of the numpy library. But how to perform matrix multiplications on a GPU using PyCUDA?
PyCUDA offers the possibility of interfacing codes with already available CUDA libraries. This is fortunate since, as is well known, cuBLAS enables matrix multiplications on GPUs in an extremely effective and fast way.
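For reference, the CPU-side numpy multiplication mentioned above can be sketched as follows (sizes are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 128))
B = rng.standard_normal((128, 64))

C = np.dot(A, B)                       # equivalent to A @ B
print(C.shape)                         # (256, 64)

# Sanity check: one entry against the explicit sum-of-products definition
assert np.isclose(C[0, 0], sum(A[0, k] * B[k, 0] for k in range(128)))
```

This is the baseline that the GPU routes below aim to accelerate for large matrices.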
A first, CUDA-like possibility
A first way to interface PyCUDA with the cuBLAS library is employing the cublas module of the scikit-cuda package. Actually, it would be…
PyCUDA is a very useful tool to embed low-level programming of Graphics Processing Units (GPUs) with CUDA in the higher-level programming framework provided by Python. It makes available a whole set of facilities for step-by-step code debugging, checking intermediate variable values using breakpoints, simple prints, plots of vectors or images of matrices. In its interactive shell version, for example using a Jupyter notebook, PyCUDA coding is even simpler. Jupyter is, however, only a possibility to be exploited locally, but free online services also exist, like Kaggle or Google Colaboratory.
Coding with PyCUDA occurs without significant…
We have been teaching, researching and consulting on parallel programming on Graphics Processing Units (GPUs) since the first delivery of CUDA. We also work with Matlab and Python.