Fast Data Processing with Spark

  • Length: 120 pages
  • Edition: 1
  • Publisher: Packt Publishing
  • Publication Date: 2013-10-23
  • ISBN-10: 1782167064
  • ISBN-13: 9781782167068
Description

High-speed distributed computing made easy with Spark

Overview

  • Use Spark’s interactive shell to prototype distributed applications (see the sketch after this list)
  • Deploy Spark jobs to a variety of clusters, such as standalone, Mesos, YARN, EC2, and EMR, with deployment tooling such as Chef
  • Use Shark’s SQL-like query syntax with Spark
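
For example, a minimal shell session in Scala might look like the following sketch, which assumes a local file named input.txt; in the shell, sc is a ready-made SparkContext:

    // Launched via ./spark-shell; `sc` is provided automatically.
    // A classic word count over a hypothetical local file:
    val lines  = sc.textFile("input.txt")
    val counts = lines.flatMap(_.split(" "))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)
    counts.take(5).foreach(println)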

In Detail

Spark is a framework for writing fast, distributed programs. It solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big datasets.
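
As a concrete illustration of the in-memory approach, the following sketch caches a dataset once and then runs two queries against memory; the file path and log format are hypothetical:

    // Cache the RDD after the first read; later actions reuse memory
    // instead of rereading from disk.
    val logs = sc.textFile("hdfs:///logs/app.log").cache()  // hypothetical path
    val errors = logs.filter(_.contains("ERROR")).count()
    val warnings = logs.filter(_.contains("WARN")).count()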

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API to deploying your job to the cluster and tuning it for your purposes.

Fast Data Processing with Spark covers everything from setting up your Spark cluster in a variety of situations (standalone, EC2, and so on) to using the interactive shell to write distributed code. From there, we move on to how to write and deploy distributed jobs in Java, Scala, and Python.
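
A minimal standalone job in Scala might look like the sketch below; the object name, master URL, and input path are placeholders, and the SparkContext(master, appName) constructor matches the Spark API of this book's era:

    import org.apache.spark.SparkContext

    // A hypothetical standalone job; substitute your cluster's master
    // URL and a real input path before running.
    object SimpleJob {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext("spark://master:7077", "SimpleJob")
        val lines = sc.textFile("hdfs:///data/input.txt")
        println("Line count: " + lines.count())
        sc.stop()
      }
    }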

We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Shark with Hive to run SQL-like queries from Spark, as well as how to manipulate resilient distributed datasets (RDDs).
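
For a flavor of the Shark side, the sketch below assumes a SharkContext named sc (as provided by the Shark shell of that era) and an existing Hive table called users; the sql2rdd call follows the Shark 0.8-era documentation and should be treated as an assumption:

    // `sc` is a SharkContext here, not a plain SparkContext; `users`
    // is a hypothetical Hive table.
    val young = sc.sql2rdd("SELECT name, age FROM users WHERE age < 20")
    println("Young users: " + young.count())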

What you will learn from this book

  • Prototype distributed applications with Spark’s interactive shell
  • Learn different ways to interact with Spark’s distributed representation of data (RDDs)
  • Load data from a variety of data sources
  • Query Spark with a SQL-like query syntax
  • Integrate Shark queries with Spark programs
  • Effectively test your distributed software (see the sketch after this list)
  • Tune a Spark installation
  • Install and set up Spark on your cluster
  • Work effectively with large data sets
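
On the testing point, a common pattern is to run Spark in local mode inside a unit test; in this sketch, WordCount.countWords is a hypothetical function under test that returns an RDD of (word, count) pairs:

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._  // pair-RDD implicits

    // Run against an in-process local "cluster"; no real cluster needed.
    val sc = new SparkContext("local", "test")
    try {
      val input = sc.parallelize(Seq("a b", "b"))
      val counts = WordCount.countWords(input).collectAsMap()
      assert(counts("b") == 2)
    } finally {
      sc.stop()
    }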

Approach

This book is a basic, step-by-step tutorial that will help readers take advantage of all that Spark has to offer.

Who this book is written for

Fast Data Processing with Spark is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have been working on problems too large to handle on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.

Table of Contents

Chapter 1: Installing Spark and Setting Up Your Cluster
Chapter 2: Using the Spark Shell
Chapter 3: Building and Running a Spark Application
Chapter 4: Creating a SparkContext
Chapter 5: Loading and Saving Data in Spark
Chapter 6: Manipulating Your RDD
Chapter 7: Shark – Using Spark with Hive
Chapter 8: Testing
Chapter 9: Tips and Tricks
