Hadoop: Data Processing and Modelling

Description

Unlock the power of your data with the Hadoop 2.X ecosystem and its data warehousing techniques across large data sets

About This Book

  • Conquer the mountain of data using Hadoop 2.X tools
  • The authors succeed in creating a context for Hadoop and its ecosystem
  • Hands-on examples and recipes that give you the bigger picture and help you master Hadoop 2.X data processing platforms
  • Overcome challenging data processing problems with this exhaustive Hadoop 2.X course

Who This Book Is For

This course is for Java developers who know scripting and want a career shift into the Hadoop and Big Data segment of the IT industry. Whether you are a Hadoop novice or an expert, this book will take you to the most advanced level of Hadoop 2.X.

What You Will Learn

  • Best practices for setup and configuration of Hadoop clusters, tailoring the system to the problem at hand
  • Integration with relational databases, using Hive for SQL queries and Sqoop for data transfer
  • Installing and maintaining a Hadoop 2.X cluster and its ecosystem
  • Advanced data analysis using Hive, Pig, and MapReduce programs
  • Machine learning principles with libraries such as Mahout, and batch and stream data processing using Apache Spark
  • Understand the changes involved in the move from Hadoop 1.0 to Hadoop 2.0
  • Dive into YARN and Storm and use YARN to integrate Storm with Hadoop
  • Deploy Hadoop on Amazon Elastic MapReduce, discover HDFS replacements, and learn about HDFS Federation
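The MapReduce programming model that underpins several of the topics above can be sketched without a cluster. The snippet below is a toy in-memory word count illustrating the map, shuffle, and reduce phases; it is a plain-Python simulation for intuition only, not the Hadoop API:

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: sum the counts emitted for each word
    return (key, sum(values))

lines = ["hadoop stores data", "hadoop processes data"]
pairs = [kv for line in lines for kv in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

In real Hadoop the same three phases run in parallel across machines, with HDFS supplying the input splits and YARN scheduling the map and reduce tasks.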

In Detail

It is often said that data is eating the world, and in this age of Big Data, businesses produce data in huge volumes every day; this rising tide of data needs to be organized and analyzed in a more secure way. With proper and effective use of Hadoop, you can build new and improved models, and on that basis make the right decisions.

The first module, Hadoop Beginner's Guide, walks you through understanding Hadoop with very detailed instructions and shows you how to go about using it. Commands are explained in sections called "What just happened?" for greater clarity and understanding.

The second module, Hadoop Real World Solutions Cookbook, 2nd edition, is an essential tutorial for effectively implementing a big data warehouse in your business, with detailed recipes covering the latest technologies such as YARN and Spark.

Big data has become a key basis of competition and a new wave of productivity growth. Once you are familiar with the basics and have implemented end-to-end big data use cases, you will move on to the third module, Mastering Hadoop.

In short, if you need to take your Hadoop skill set to the next level after you have nailed the basics and the advanced concepts, this course is indispensable.

When you finish this course, you will be able to tackle real-world scenarios and become a big data expert, using the tools and knowledge gained from the various step-by-step tutorials and recipes.

Style and approach

This course covers everything from the basic concepts of Hadoop to the advanced mechanisms you must master to become a big data expert. The goal is to help you learn the essentials through step-by-step tutorials and then move on to recipes offering real-world solutions. It covers all the important aspects of Hadoop, from system design and configuration to machine learning principles with various libraries, with chapters illustrated by code fragments and schematic diagrams. It is a compendious course that explores Hadoop from the basics to the most advanced techniques available in Hadoop 2.X.

Table of Contents

1. Module 1
1. What It’s All About
2. Getting Hadoop Up and Running
3. Understanding MapReduce
4. Developing MapReduce Programs
5. Advanced MapReduce Techniques
6. When Things Break
7. Keeping Things Running
8. A Relational View on Data with Hive
9. Working with Relational Databases
10. Data Collection with Flume
11. Where to Go Next

2. Module 2
1. Getting Started with Hadoop 2.X
2. Exploring HDFS
3. Mastering Map Reduce Programs
4. Data Analysis Using Hive, Pig, and HBase
5. Advanced Data Analysis Using Hive
6. Data Import/Export Using Sqoop and Flume
7. Automation of Hadoop Tasks Using Oozie
8. Machine Learning and Predictive Analytics Using Mahout and R
9. Integration with Apache Spark
10. Hadoop Use Cases

3. Module 3
1. Hadoop 2.X
2. Advanced MapReduce
3. Advanced Pig
4. Advanced Hive
5. Serialization and Hadoop I/O
6. YARN – Bringing Other Paradigms to Hadoop
7. Storm on YARN – Low Latency Processing in Hadoop
8. Hadoop on the Cloud
9. HDFS Replacements
10. HDFS Federation
11. Hadoop Security
12. Analytics Using Hadoop
