
Mastering Parallel Programming with R

  • Length: 244 pages
  • Edition: 1
  • Publisher: Packt Publishing
  • Publication Date: 2016-05-31
  • ASIN: B017XSFKFG
  • Sales Rank: #1536474
Description

Master the robust features of R parallel programming to accelerate your data science computations

About This Book

  • Create R programs that exploit the computational capability of your cloud platforms and computers to the fullest
  • Become an expert in writing the most efficient and highest-performance parallel algorithms in R
  • Get to grips with the concept of parallelism to accelerate your existing R programs

Who This Book Is For

This book is for R programmers who want to step beyond R’s inherent single-threaded, memory-bound limitations and learn how to implement the highly accelerated, scalable algorithms that performant processing of Big Data demands. No previous knowledge of parallelism is required. The book also caters to the more advanced technical programmer who wants to go beyond high-level parallel frameworks.

What You Will Learn

  • Create and structure efficient load-balanced parallel computation in R, using R’s built-in parallel package (a minimal sketch follows this list)
  • Deploy and utilize cloud-based parallel infrastructure from R, including launching a distributed computation on Hadoop running on Amazon Web Services (AWS)
  • Understand parallel efficiency, and apply simple techniques to benchmark your code, measure speed-up, and target improvements
  • Develop complex parallel processing algorithms with the standard Message Passing Interface (MPI) using the Rmpi, pbdMPI, and SPRINT packages
  • Build and extend a parallel R package (SPRINT) with your own MPI-based routines
  • Implement accelerated numerical functions in R utilizing the vector processing capability of your Graphics Processing Unit (GPU) with OpenCL
  • Understand parallel programming pitfalls, such as deadlock and numerical instability, and the approaches to handle and avoid them
  • Build task farm master-worker, spatial grid, and hybrid parallel R programs
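
As a flavour of the first point in this list (R’s built-in parallel package), here is a minimal, illustrative sketch. It is not taken from the book; the toy slow_square() task and the cluster size of 4 are assumptions for demonstration only. It contrasts static scheduling (parLapply) with load-balanced scheduling (parLapplyLB):

    # Minimal illustrative sketch (not from the book): static vs. load-balanced
    # scheduling with R's built-in 'parallel' package.
    library(parallel)

    # A toy task whose runtime varies from call to call, so load balancing matters
    slow_square <- function(x) {
      Sys.sleep(runif(1, 0, 0.05))   # simulate uneven per-task work
      x * x
    }

    cl <- makeCluster(4)                                   # start 4 local worker processes
    static_res   <- parLapply(cl, 1:100, slow_square)      # pre-chunked (static) scheduling
    balanced_res <- parLapplyLB(cl, 1:100, slow_square)    # dynamic, load-balanced scheduling
    stopCluster(cl)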

In Detail

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources.

Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of the parallel versions of lapply() in R’s built-in parallel package, to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level, scalable parallel programming with message passing using Rmpi and pbdMPI, applicable to clusters and supercomputers, and how to exploit the thousands of simple processors in a GPU through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including how to assess code performance and implement load balancing; the pitfalls to avoid, including deadlock and numerical instability; how to structure your code and data for the type of parallelism most appropriate to your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.
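
To give a flavour of the message-passing style described above, the following is a minimal SPMD-style sketch using pbdMPI. It is illustrative only and not taken from the book; the file name hello_mpi.R and the toy partial-sum computation are assumptions.

    # hello_mpi.R -- minimal pbdMPI sketch (illustrative, not from the book).
    # Launch with an MPI runner, e.g.: mpiexec -np 4 Rscript hello_mpi.R
    library(pbdMPI)
    init()                               # initialise MPI; every rank runs this same script
    rank <- comm.rank()                  # this process's rank: 0 .. size-1
    size <- comm.size()                  # total number of MPI processes
    comm.cat("Hello from rank", rank, "of", size, "\n", all.rank = TRUE)
    local_sum <- sum(seq(rank + 1, 1000, by = size))   # each rank sums its own slice of 1:1000
    total <- reduce(local_sum, op = "sum")             # combine partial sums on rank 0
    comm.print(total)                    # printed by rank 0 only
    finalize()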

Style and approach

This book leads you chapter by chapter from the simpler to the more complex forms of parallelism. The author’s insights are presented through clear, practical examples applied to a range of different problems, with comprehensive reference information for each of the R packages employed. The book can be read from start to finish, or dipped into chapter by chapter: each chapter describes a specific parallel approach and technology, so it can be read as a standalone.

Table of Contents

Chapter 1. Simple Parallelism with R
Chapter 2. Introduction to Message Passing
Chapter 3. Advanced Message Passing
Chapter 4. Developing SPRINT, an MPI-Based R Package for Supercomputers
Chapter 5. The Supercomputer in Your Laptop
Chapter 6. The Art of Parallel Programming
