Learn CUDA Programming

Book Description

Explore different GPU programming methods using libraries and directives such as OpenACC, with extensions to languages such as C, C++, and Python

Key Features

  • Learn parallel programming principles, practices, and performance analysis in GPU computing
  • Get to grips with distributed multi-GPU programming and other approaches to GPU programming
  • Understand how GPU acceleration in deep learning models can improve their performance


Compute Unified Device Architecture (CUDA) is NVIDIA's parallel computing platform and application programming interface. It's designed to work with programming languages such as C, C++, and Python. With CUDA, you can leverage a GPU's parallel computing power for a range of high-performance computing applications in the fields of science, healthcare, and deep learning.

Learn CUDA Programming will help you learn GPU parallel programming and understand its modern applications. In this book, you'll discover CUDA programming approaches for modern GPU architectures. You'll not only be guided through GPU features, tools, and APIs, you'll also learn how to analyze performance with sample parallel programming code. This book will help you optimize the performance of your apps by giving insights into CUDA programming platforms with various libraries, compiler directives (OpenACC), and other languages. As you progress, you'll learn how additional computing power can be generated using multiple GPUs in a box or in multiple boxes. Finally, you'll explore how CUDA accelerates deep learning workloads, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

By the end of this CUDA book, you'll be equipped with the skills you need to integrate the power of GPU computing in your applications.

What you will learn

  • Understand general GPU operations and programming patterns in CUDA
  • Uncover the difference between GPU programming and CPU programming
  • Analyze GPU application performance and implement optimization strategies
  • Explore GPU programming, profiling, and debugging tools
  • Grasp parallel programming algorithms and how to implement them
  • Scale GPU-accelerated applications with multi-GPU and multi-node configurations
  • Delve into GPU programming platforms with accelerated libraries, Python, and OpenACC
  • Gain insights into deep learning accelerators in CNNs and RNNs using GPUs

Who this book is for

This beginner-level book is for programmers who want to delve into parallel computing, become part of the high-performance computing community, and build modern applications. Basic C and C++ programming experience is assumed. For deep learning enthusiasts, this book covers Python interoperability, DL libraries, and practical examples of performance estimation.

Table of Contents

  1. Introduction to CUDA Programming
  2. CUDA Memory Management
  3. CUDA Thread Programming: Performance Indicators and Optimization Strategies
  4. CUDA Execution Model and Optimization Strategies
  5. CUDA Application Monitoring and Debugging
  6. Scalable Multi-GPU Programming
  7. Parallel Programming Patterns in CUDA
  8. GPU-Accelerated Libraries and Popular Programming Languages
  9. GPU Programming Using OpenACC
  10. Deep Learning Acceleration with CUDA
  11. Appendix

Book Details