Apache Sqoop Cookbook

Book Description

Integrating data from multiple sources is essential in the age of Big Data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.

Sqoop is both powerful and bewildering, but with this cookbook’s problem-solution-discussion format, you’ll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.

  • Transfer data from a single database table into your Hadoop ecosystem
  • Keep table data and Hadoop in sync by importing data incrementally
  • Import data from more than one database table
  • Customize transferred data by calling various database functions
  • Export generated, processed, or backed-up data from Hadoop to your database (a sketch follows this list)
  • Run Sqoop within Oozie, Hadoop’s specialized workflow scheduler
  • Load data into Hadoop’s data warehouse (Hive) or database (HBase)
  • Handle installation, connection, and syntax issues common to specific database vendors
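
For instance, the export case mentioned above comes down to pointing Sqoop at an HDFS directory and a target database table. The connection string, table name, and HDFS path here are illustrative placeholders rather than examples taken from the book:

sqoop export \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --table cities \
  --export-dir /user/sqoop/cities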

Q&A with Kathleen Ting and Jarek Jarcec Cecho, author of "Apache Sqoop Cookbook"

Q. What makes this book important right now?

A. Hadoop has quickly become the standard for processing and analyzing Big Data. In order to integrate a new Hadoop deployment into your existing environment, you will need to transfer data stored in relational databases into Hadoop. Sqoop optimizes data transfers between Hadoop and databases with a command line interface listing 60 parameters. In this book, we'll focus on applying the parameters in common use cases to help you deploy and use Sqoop in your environment.
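
To give a sense of what a typical invocation looks like, here is a minimal import that pulls a single table into HDFS. The connection string, credentials, and table name are placeholders, not examples from the book:

sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --table cities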

Q. What do you hope that readers of your book will walk away with?

A. One recipe at a time, this book guides you from basic commands not requiring prior Sqoop knowledge all the way to very advanced use cases. These recipes are detailed enough not only to enable you to deploy them within your environment but also to understand Sqoop's inner workings.

Q. Can you give us a little taste of the contents?

A. Imagine a scenario where you are incrementally importing records from MySQL into Hadoop. When you resume the import, you notice that some records have been modified, and you want to include those updated records as well. How do you drop the older copies of records that have been updated and then merge in the newer copies?

This sounds like a use case for the lastmodified incremental mode. Internally, the lastmodified import consists of two standalone MapReduce jobs. The first job imports the delta of changed data, much as a normal import does, and saves it in a temporary directory on HDFS. The second job takes both the old and new data and merges them into the final output, preserving only the last updated value for each row.

Here's an example:

sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --table visits \
  --incremental lastmodified \
  --check-column last_update_date \
  --last-value "2013-05-22 01:01:01"
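
If the target directory on HDFS already holds data from an earlier run, Sqoop can carry out the merge step described above in the same invocation when you supply a merge key with --merge-key. The id column and the --target-dir path below are assumptions for illustration, not taken from the book:

sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --table visits \
  --incremental lastmodified \
  --check-column last_update_date \
  --last-value "2013-05-22 01:01:01" \
  --merge-key id \
  --target-dir /user/sqoop/visits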

Table of Contents

Chapter 1. Getting Started
Chapter 2. Importing Data
Chapter 3. Incremental Import
Chapter 4. Free-Form Query Import
Chapter 5. Export
Chapter 6. Hadoop Ecosystem
Chapter 7. Specialized Connectors
