
Ensemble Machine Learning

  • Length: 438 pages
  • Edition: 1
  • Publisher:
  • Publication Date: 2017-12-21
  • ISBN-10: 178829775X
  • ISBN-13: 9781788297752
Description

Ensemble Machine Learning: A beginner’s guide that combines powerful machine learning algorithms to build optimized models

An effective guide to using ensemble techniques to enhance machine learning models

Key Features

  • Learn how to maximize popular machine learning algorithms such as random forests, decision trees, AdaBoost, K-nearest neighbor, and more
  • Get a practical approach to building efficient machine learning models using ensemble techniques with real-world use cases
  • Implement concepts such as boosting, bagging, and stacking ensemble methods to improve your model prediction accuracy

Book Description

Ensembling is the technique of combining two or more machine learning algorithms, similar or dissimilar, to create a model with superior predictive power. This book shows you how to combine many weak learners into a strong predictive model. It includes Python code for the different algorithms so that you can easily understand and implement them in your own systems.
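The core idea of combining dissimilar learners can be sketched with scikit-learn's `VotingClassifier`. This is a minimal illustration, not code from the book; the dataset and base-estimator choices here are assumptions made for the example:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Three dissimilar base learners combined by majority (hard) vote
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=42)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
acc = accuracy_score(y_test, ensemble.predict(X_test))
print(f"ensemble accuracy: {acc:.3f}")
```

Each base learner errs in a different way, so the majority vote tends to be at least as accurate as the typical individual model.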

This book covers different machine learning algorithms that are widely used in practice to make predictions and classifications. It addresses the different aspects of a prediction framework, such as data pre-processing, model training, model validation, and more. You will learn about different ensemble approaches, such as bagging (decision trees and random forests), boosting (AdaBoost), and stacking (combining the predictions of multiple models with a meta-learner).
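The three ensemble families named above can be compared side by side in scikit-learn. This is a rough sketch under assumed settings (a synthetic dataset and small estimator counts), not the book's own experiments:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    BaggingClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

models = {
    # Bagging: many trees fit on bootstrap resamples, predictions averaged
    "bagging": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=0
    ),
    # Boosting: shallow learners fit sequentially, reweighting hard examples
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Stacking: a meta-learner combines the base models' predictions
    "stacking": StackingClassifier(
        estimators=[
            ("dt", DecisionTreeClassifier(random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in models.items()
}
for name, score in scores.items():
    print(f"{name}: {score:.3f}")
```

Cross-validated scores make the comparison fair: each ensemble is trained and evaluated on the same folds.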

Then you’ll learn how to implement these techniques by building ensemble models with TensorFlow and Python libraries such as scikit-learn and NumPy. As machine learning touches almost every field of the digital world, you’ll see how these algorithms can be used in applications such as computer vision, speech recognition, recommendation systems, clustering and document classification, regression on data, and more.

By the end of this book, you’ll understand how to combine machine learning algorithms into ensembles that work behind the scenes to overcome common modeling challenges.

What you will learn

  • Understand why bagging improves classification and regression performance
  • Get to grips with implementing AdaBoost and different variants of this algorithm
  • See the bootstrap method and its application to bagging
  • Perform regression on Boston housing data using scikit-learn and NumPy
  • Know how to use random forests for Iris data classification
  • Get to grips with the classification of the sonar dataset using KNN, Perceptron, and Logistic Regression
  • Discover how to improve prediction accuracy by fine-tuning the model parameters
  • Master the analysis of a trained predictive model for over-fitting/under-fitting cases
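The bootstrap method mentioned above, which underpins bagging, can be sketched in plain NumPy. The toy data and resample count here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # toy sample

# Draw B bootstrap resamples (sampling WITH replacement, same size
# as the original data) and record the mean of each resample
B = 1000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(B)
])

# A 95% percentile confidence interval for the mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean: {data.mean():.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
```

Bagging applies the same resampling idea to model fitting: each base learner is trained on one bootstrap resample, and their predictions are averaged.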

Who This Book Is For

This book is for data scientists, machine learning practitioners, and deep learning enthusiasts who want to implement ensemble techniques and take a deep dive into the world of machine learning algorithms. You are expected to understand Python code and have a basic knowledge of probability theory, statistics, and linear algebra.

Table of Contents

Chapter 1. Introduction to Ensemble Learning
Chapter 2. Decision Trees
Chapter 3. Random Forest
Chapter 4. Random Subspace and KNN Bagging
Chapter 5. AdaBoost Classifier
Chapter 6. Gradient Boosting Machines
Chapter 7. XGBoost: Extreme Gradient Boosting
Chapter 8. Stacked Generalization
Chapter 9. Stacked Generalization, Part 2
Chapter 10. Modern Day Machine Learning
Chapter 11. Appendix: Troubleshooting
