Data Science on the Google Cloud Platform: Implementing End-to-End Real-Time Data Pipelines: From Ingest to Machine Learning
![](https://ziontechgroup.com/wp-content/uploads/2024/12/81gT4lP7A4L._SL1500_.jpg)
Price: $64.99 – $38.11
(as of Dec 14, 2024 16:59:11 UTC)
From the Publisher
From the Preface
In this book, we walk through an example of this new transformative, more collaborative way of doing data science. You will learn how to implement an end-to-end data pipeline: we will begin by ingesting the data in a serverless way and work our way through data exploration, dashboards, relational databases, and streaming data, all the way to training and operationalizing a machine learning model. I cover all these aspects of data-based services because data engineers will be involved in designing the services, developing the statistical and machine learning models, and implementing them in large-scale production and in real time.
Who This Book Is For
If you use computers to work with data, this book is for you. You might go by the title of data analyst, database administrator, data engineer, data scientist, or systems programmer today. Although your role might be narrower (perhaps you do only data analysis, or only model building, or only DevOps), you want to stretch your wings a bit: you want to learn how to create data science models as well as how to implement them at scale in production systems.
Google Cloud Platform is designed to make you forget about infrastructure. The marquee data services (Google BigQuery, Cloud Dataflow, Cloud Pub/Sub, and Cloud ML Engine) are all serverless and autoscaling. When you submit a query to BigQuery, it is run on thousands of nodes and you get your result back; you don’t spin up a cluster or install any software. Similarly, when you submit a data pipeline in Cloud Dataflow or a machine learning job in Cloud Machine Learning Engine, you can process data at scale and train models at scale without worrying about cluster management or failure recovery. Cloud Pub/Sub is a global messaging service that autoscales to the throughput and number of subscribers and publishers without any work on your part. Even when you’re running open source software like Apache Spark that’s designed to operate on a cluster, Google Cloud Platform makes it easy: leave your data on Google Cloud Storage, not in HDFS, and spin up a job-specific cluster to run the Spark job. After the job completes, you can safely delete the cluster. Because of this job-specific infrastructure, there’s no need to fear overprovisioning hardware or running out of capacity when you need to run a job. Plus, data is encrypted, both at rest and in transit, and kept secure. As a data scientist, not having to manage infrastructure is incredibly liberating.
The reason that you can afford to forget about virtual machines and clusters when running on Google Cloud Platform comes down to networking. The network bisection bandwidth within a Google Cloud Platform datacenter is 1 PBps, and so sustained reads off Cloud Storage are extremely fast. What this means is that you don’t need to shard your data as you would with traditional MapReduce jobs. Instead, Google Cloud Platform can autoscale your compute jobs by shuffling the data onto new compute nodes as needed. Hence, you’re liberated from cluster management when doing data science on Google Cloud Platform.
These autoscaled, fully managed services make it easier to implement data science models at scale, which is why data scientists no longer need to hand off their models to data engineers. Instead, they can write a data science workload, submit it to the cloud, and have that workload executed automatically in an autoscaled manner. At the same time, data science packages are becoming simpler and simpler, so it has become extremely easy for an engineer to slurp in data and use a canned model to get an initial (and often very good) model up and running. With well-designed packages and easy-to-consume APIs, you don’t need to know the esoteric details of data science algorithms, only what each algorithm does and how to link algorithms together to solve realistic problems. This convergence between data science and data engineering is why you can stretch your wings beyond your current role.
Rather than simply read this book cover-to-cover, I strongly encourage you to follow along with me by also trying out the code. The full source code for the end-to-end pipeline I build in this book is on GitHub. Create a Google Cloud Platform project and after reading each chapter, try to repeat what I did by referring to the code and to the Readme file in each folder of the GitHub repository.
Publisher : O’Reilly Media; 1st edition (February 6, 2018)
Language : English
Paperback : 402 pages
ISBN-10 : 1491974567
ISBN-13 : 978-1491974568
Item Weight : 1.44 pounds
Dimensions : 7.25 x 1 x 9.25 inches
Data Science on the Google Cloud Platform: Implementing End-to-End Real-Time Data Pipelines
In today’s fast-paced digital world, the ability to quickly and efficiently analyze large amounts of data has become essential for businesses to stay competitive. Data science is a crucial tool in this process, allowing companies to extract valuable insights from their data and make informed decisions.
Google Cloud Platform (GCP) offers a powerful set of tools and services for data science and analytics, making it easier than ever to build end-to-end real-time data pipelines. From data ingestion to machine learning, GCP provides a comprehensive suite of services to help businesses harness the power of their data.
In this post, we will walk you through the process of implementing an end-to-end real-time data pipeline on the Google Cloud Platform. We will cover the following steps:
1. Data Ingestion: We will start by ingesting data from various sources into GCP using tools like Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow. These tools make it easy to collect, store, and process data in real time.
2. Data Processing: Once the data is ingested, we will use tools like Cloud Dataflow and BigQuery to process and analyze the data. These tools allow us to run complex data transformations and queries at scale, making it easy to extract valuable insights from our data.
3. Machine Learning: Finally, we will use tools like Cloud AI Platform and TensorFlow to build and deploy machine learning models on GCP. These tools make it easy to train, test, and deploy models at scale, allowing us to make accurate predictions and automate decision-making processes.
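To make the ingestion step concrete without a GCP project, here is a minimal sketch of the publish/subscribe pattern that Cloud Pub/Sub provides as a managed service. This is a local, standard-library stand-in (a thread-safe queue plays the role of a topic); the real service uses the google-cloud-pubsub client library, and the event fields here are illustrative assumptions, not a Pub/Sub schema:

```python
import json
import queue
import threading

# Local stand-in for a Pub/Sub topic: the real thing is a managed,
# globally replicated message bus; a thread-safe FIFO plays that role here.
topic = queue.Queue()

def publish(event: dict) -> None:
    """Serialize an event to JSON bytes, as you would before publishing."""
    topic.put(json.dumps(event).encode("utf-8"))

received = []

def subscriber() -> None:
    """Pull and decode messages until the sentinel None arrives."""
    while True:
        msg = topic.get()
        if msg is None:
            break
        received.append(json.loads(msg.decode("utf-8")))

worker = threading.Thread(target=subscriber)
worker.start()
for i in range(3):
    publish({"sensor_id": i, "reading": 20.0 + i})  # hypothetical events
topic.put(None)  # sentinel: no more messages
worker.join()
print(len(received))  # 3 messages delivered, in publish order
```

The key design point mirrored here is decoupling: publishers and subscribers never call each other directly, which is what lets the managed service scale each side independently.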
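For the processing step, the workhorse transformation in a streaming pipeline is a windowed aggregation. The following standard-library sketch shows the idea behind a fixed-window average, the kind of transform you would express with windowing and a mean combiner in Cloud Dataflow; the timestamps and window size are illustrative assumptions:

```python
from collections import defaultdict

# Timestamped readings (epoch seconds, value); in the real pipeline these
# would stream in from Pub/Sub rather than sit in a list.
events = [(0, 10.0), (30, 14.0), (65, 20.0), (90, 22.0), (130, 30.0)]

def windowed_mean(events, window_secs=60):
    """Assign each event to a fixed window by its timestamp and
    average the values in each window."""
    buckets = defaultdict(list)
    for ts, value in events:
        window_start = ts // window_secs * window_secs
        buckets[window_start].append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}

print(windowed_mean(events))  # {0: 12.0, 60: 21.0, 120: 30.0}
```

The managed service adds what this toy omits: handling of late and out-of-order data, autoscaled workers, and exactly-once aggregation state.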
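Finally, the machine learning step boils down to a train/predict cycle. As a toy, dependency-free stand-in for what you would do with TensorFlow on Cloud AI Platform, here is a one-feature linear model fit by gradient descent on synthetic data; the data, learning rate, and iteration count are illustrative assumptions:

```python
# Synthetic training data with known ground truth: y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

# Fit y ≈ w*x + b by minimizing mean squared error with gradient descent.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # recovers roughly w=2.0, b=1.0

def predict(x):
    """Use the trained model to score a new input."""
    return w * x + b
```

In production the same cycle runs at scale: the managed service distributes training over many workers and serves `predict` behind an autoscaled endpoint.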
By following these steps, you can build a robust end-to-end real-time data pipeline on the Google Cloud Platform, enabling your business to make data-driven decisions and stay ahead of the competition. So, what are you waiting for? Start harnessing the power of data science on GCP today!