
HPC Module: CUDA

Synopsis

Adds the CUDA API to your environment.

About This Software
Official Site https://developer.nvidia.com/cuda-toolkit
Tags

Installed Versions

Version   Install Date   Default?
9.0       2018-04-04
9.1       2018-04-04
10.1      2019-08-02
11.2.1    2021-02-18
11.4.0    2021-07-30

Description

The NVIDIA® CUDA® Toolkit provides a development environment for creating high performance GPU-accelerated applications. With the CUDA Toolkit, you can develop, optimize and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers. The toolkit includes GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler and a runtime library to deploy your application.
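As a minimal sketch of that workflow (the kernel, launch configuration, and the file name vec_add.cu are illustrative assumptions, not part of this module's documentation), the following CUDA C++ program adds two vectors on the GPU and can be built with nvcc -o vec_add vec_add.cu:

// Minimal CUDA C++ example: element-wise vector addition (error checking omitted for brevity).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);   // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}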

GPU-accelerated CUDA libraries enable drop-in acceleration across multiple domains such as linear algebra, image and video processing, deep learning and graph analytics. For developing custom algorithms, you can use available integrations with commonly used languages and numerical packages as well as well-published development APIs. Your CUDA applications can be deployed across all NVIDIA GPU families available on premises and on GPU instances in the cloud. Using built-in capabilities for distributing computations across multi-GPU configurations, scientists and researchers can develop applications that scale from single-GPU workstations to cloud installations with thousands of GPUs.
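The sketch below illustrates the drop-in library acceleration mentioned above using cuBLAS, one of the linear-algebra libraries shipped with the toolkit; the file name saxpy.cu is an assumption. It computes y = alpha*x + y on the GPU and can be built with nvcc -o saxpy saxpy.cu -lcublas:

// Minimal cuBLAS example: single-precision SAXPY (error checking omitted for brevity).
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    const int n = 1 << 20;
    const float alpha = 2.0f;
    size_t bytes = n * sizeof(float);

    // Host data: x = 1.0, y = 1.0 everywhere
    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 1.0f; }

    // Device buffers and host-to-device copies
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // Hand the work to the library: y = alpha*x + y runs on the GPU
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);

    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);   // expect 3.0

    cublasDestroy(handle);
    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}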

The CUDA compiler (nvcc) defaults to g++ as its host C/C++ compiler. If you wish to use CUDA with the Intel compilers, you will need to set the environment variable HOST_COMPILER to icpc. For example, using the default shell (bash), you would do the following (either at the command line or in your .bashrc or .bash_profile login files):

export HOST_COMPILER=icpc
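Note that HOST_COMPILER is read by build scripts such as the Makefiles shipped with the CUDA samples; if you invoke nvcc directly, you can select the host compiler explicitly with the -ccbin option, for example nvcc -ccbin icpc vec_add.cu (vec_add.cu being the illustrative file from the example above).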

Category

Library, Programming, Software, SysAdmin