CUDA | Compute Unified Device Architecture
What is CUDA?
Introduced by NVIDIA in 2006, CUDA (originally short for Compute Unified Device Architecture) refers to two things:
1. The CUDA architecture: the massively parallel architecture of NVIDIA GPUs, with hundreds or thousands of cores.
2. The CUDA software platform and programming model: an API (application programming interface), also created by NVIDIA, that developers use to program these GPUs for general-purpose processing.
Why do you need CUDA?
CUDA enables developers to speed up compute-intensive applications by offloading the parallelizable parts of a computation to the GPU. In contrast to earlier GPU programming interfaces such as Direct3D and OpenGL, which required advanced skills in graphics programming, CUDA makes parallel programming far more accessible because it works with familiar languages such as C, C++, and Fortran. Developers only need to add a few basic keywords as extensions to these languages, which give them direct access to the GPU's virtual instruction set and parallel computational elements for executing compute kernels.
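As a minimal sketch of what these language extensions look like (this example is illustrative and not from the original article), a vector-addition kernel in CUDA C++ uses the `__global__` keyword to mark GPU code and the `<<<blocks, threads>>>` syntax to launch it:

```cuda
#include <cstdio>

// __global__ marks a compute kernel: each GPU thread handles one element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory is accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);  // 1.0 + 2.0 = 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with NVIDIA's `nvcc` compiler, the rest of the file is ordinary C++; only the kernel qualifier, the launch syntax, and the runtime calls are CUDA-specific.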
How is GIGABYTE helpful?
GIGABYTE's R-Series and G-Series server systems are optimized to work with CUDA-enabled NVIDIA GPGPUs (general-purpose graphics processing units) such as the Tesla V100, Tesla T4, and Quadro RTX Series. These hardware solutions are ready to use with the CUDA programming interface, giving developers a powerful tool for processing massively parallel workloads such as scientific simulation and deep neural network (DNN) training.