Taming Big Data with Apache Spark and Python – Hands On!


Apache Spark tutorial with 20+ hands-on examples of analyzing large data sets on your desktop or on Hadoop with Python!

New! Updated for Spark 3, with more hands-on exercises and a stronger focus on DataFrames and Structured Streaming.

“Big data” analysis is a hot and highly valuable skill, and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You’ll learn those same techniques, using your own Windows system right at home. It’s easier than you might think. Through more than 20 hands-on examples, you’ll learn to frame data analysis problems as Spark problems, and then scale your solutions up to run on cloud computing services.


What you’ll learn

  • Use DataFrames and Structured Streaming in Spark 3 (see the first sketch after this list)
  • Use the MLlib machine learning library to answer common data mining questions
  • Understand how Spark Streaming lets you process continuous streams of data in real time
  • Frame big data analysis problems as Spark problems
  • Use Amazon’s Elastic MapReduce service to run your job on a cluster with Hadoop YARN
  • Install and run Apache Spark on a desktop computer or on a cluster
  • Use Spark’s Resilient Distributed Datasets (RDDs) to process and analyze large data sets across many CPUs
  • Implement iterative algorithms such as breadth-first search using Spark
  • Understand how Spark SQL lets you work with structured data
  • Tune and troubleshoot large jobs running on a cluster
  • Share information between nodes on a Spark cluster using broadcast variables and accumulators (see the second sketch after this list)
  • Understand how the GraphX library helps with network analysis problems
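
To give you a feel for the material, here is a minimal PySpark DataFrame sketch in the spirit of the course’s examples. It is illustrative only: the file name fakefriends.csv and its age and friends columns are assumptions made up for this README, not files shipped with the course.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("FriendsByAge").getOrCreate()

# Load a CSV (hypothetical file and columns) into a DataFrame,
# letting Spark infer the column types from the data.
people = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("fakefriends.csv"))

# Average number of friends at each age, rounded to two decimal places.
(people.groupBy("age")
       .agg(F.round(F.avg("friends"), 2).alias("friends_avg"))
       .sort("age")
       .show())

spark.stop()
```

The same script runs unchanged whether you launch it with spark-submit on your desktop or on a Hadoop cluster.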
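
And here is a minimal sketch of a broadcast variable, again with made-up data (the movie-ID lookup table below is purely hypothetical). Broadcasting ships a read-only copy of the table to every executor once, instead of serializing it with each task.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("BroadcastSketch").getOrCreate()
sc = spark.sparkContext

# Hypothetical lookup table we want available on every node.
movie_names = {1: "Movie One", 2: "Movie Two", 3: "Movie Three"}

# Broadcast it once; executors read it via the .value attribute.
name_lookup = sc.broadcast(movie_names)

# Some (movieID, rating) pairs to process in parallel.
ratings = sc.parallelize([(1, 4.0), (2, 3.5), (1, 5.0)])

# Replace IDs with titles on the executors using the broadcast copy.
titled = ratings.map(lambda pair: (name_lookup.value[pair[0]], pair[1]))
print(titled.collect())

spark.stop()
```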
