- Learn to write data processing programs in Python that are highly available, reliable, and fault tolerant
- Use Amazon Web Services together with Python to build a powerful remote computation system
- Harness Python for data-intensive and resource-hungry applications
CPU-intensive data processing has become crucial given the complexity of today's big data applications. Reducing the CPU utilization per process is key to improving the overall speed of an application.
This book will teach you how to perform parallel execution of computations by distributing them across multiple processors on a single machine, thus improving the overall performance of big data processing tasks. We will cover synchronous and asynchronous models, shared memory and file systems, communication between processes, synchronization, and more.
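As a small taste of the single-machine parallelism described above, here is a minimal sketch using Python's standard-library `multiprocessing` module; the function name and inputs are illustrative, not from the book:

```python
from multiprocessing import Pool

def cpu_bound(n):
    # Stand-in for a CPU-intensive computation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Map the CPU-bound function over the inputs across 4 worker
    # processes, sidestepping the GIL for true parallelism.
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [10_000, 20_000, 30_000])
    print(results)
```

Each worker runs in a separate process with its own interpreter, which is what lets CPU-bound work scale across cores.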
What You Will Learn
- Get an introduction to parallel and distributed computing
- Compare synchronous and asynchronous programming models
- Explore parallelism in Python
- Build distributed applications with Celery
- Run Python in the cloud
- Use Python on an HPC cluster
- Test and debug distributed applications
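To illustrate the asynchronous model listed above, here is a minimal `asyncio` sketch (the coroutine names and delays are hypothetical, assumed purely for illustration) in which two I/O-bound tasks wait concurrently:

```python
import asyncio

async def fetch(name, delay):
    # Simulate an I/O-bound operation that yields control while waiting.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # gather() runs both coroutines concurrently, so the total wall
    # time is roughly max(delays), not their sum.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

if __name__ == "__main__":
    print(asyncio.run(main()))
```

This cooperative model suits I/O-bound workloads; the CPU-bound case calls for multiple processes instead.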
About the Author
Francesco Pierfederici is a software engineer who loves Python. He has been working in the fields of astronomy, biology, and numerical weather forecasting for the last 20 years.
He has built large distributed systems that use tens of thousands of cores at a time and run on some of the fastest supercomputers in the world. He has also written plenty of applications of dubious usefulness that were great fun to build. Mostly, he just likes to build things.
Table of Contents
Chapter 1. An Introduction to Parallel and Distributed Computing
Chapter 2. Asynchronous Programming
Chapter 3. Parallelism in Python
Chapter 4. Distributed Applications with Celery
Chapter 5. Python in the Cloud
Chapter 6. Python on an HPC Cluster
Chapter 7. Testing and Debugging Distributed Applications
Chapter 8. The Road Ahead