Introduction to Apache Spark 2 Programming

Bascom Bridge's Introduction to Apache Spark 2 training provides students with a solid technical introduction to the Spark architecture and how Spark works. Attendees learn the basic building blocks of Spark, including RDDs and the distributed compute engine, as well as higher-level constructs that provide a simpler and more capable interface, including Spark SQL and DataFrames.

This course also covers more advanced capabilities such as the use of Spark Streaming to process streaming data, and provides an overview of Spark Graph Processing (GraphX and GraphFrames) and Spark Machine Learning (SparkML Pipelines). Finally, the class explores possible performance issues, troubleshooting, cluster deployment techniques, and strategies for optimization.

LOCATION AND PRICING

Most Bascom Bridge courses are delivered as private, customized, on-site training at our clients' locations worldwide for groups of 3 or more attendees, and are tailored to each group's specific needs. Please visit our client list to see organizations for whom we have delivered private in-house training. These courses can also be delivered as live, private online classes for groups that are geographically dispersed or wish to save on instructor or student travel expenses. To receive a customized proposal and price quote for private training at your site or online, please contact us.

In addition, some courses are available as live, online classes for individuals.

APACHE SPARK 2 TRAINING OBJECTIVES

All students will:

  • Understand the need for Spark in data processing
  • Understand the Spark architecture and how it distributes computations to cluster nodes
  • Be familiar with basic installation / setup / layout of Spark
  • Use Spark for interactive and ad hoc operations
  • Use Dataset/DataFrame/Spark SQL to efficiently process structured data
  • Understand basics of RDDs (Resilient Distributed Datasets), and data partitioning, pipelining, and computations
  • Understand Spark’s data caching and its usage
  • Understand performance implications and optimizations when using Spark
  • Be familiar with Spark Graph Processing and SparkML machine learning
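
As a small taste of one objective above, understanding lazy evaluation and pipelining, here is a plain-Python sketch of that evaluation model. This is not Spark API code; the `MiniRDD` class is invented purely for illustration, and runs without any Spark installation.

```python
# Illustrative sketch only: a plain-Python imitation of Spark's lazy,
# pipelined evaluation model. "MiniRDD" is an invented name for this example.

class MiniRDD:
    def __init__(self, data, ops=None):
        self._data = data
        self._ops = ops or []  # transformations are only recorded here, not run

    def map(self, f):
        # Transformations are lazy: they return a new MiniRDD with the
        # operation appended to the plan, without touching the data.
        return MiniRDD(self._data, self._ops + [("map", f)])

    def filter(self, p):
        return MiniRDD(self._data, self._ops + [("filter", p)])

    def collect(self):
        # An "action" finally pulls each element through the whole chain of
        # recorded transformations in one pass (pipelining).
        out = []
        for x in self._data:
            keep = True
            for kind, f in self._ops:
                if kind == "map":
                    x = f(x)
                elif kind == "filter" and not f(x):
                    keep = False
                    break
            if keep:
                out.append(x)
        return out

rdd = MiniRDD(range(10))
result = rdd.map(lambda x: x * x).filter(lambda x: x % 2 == 0).collect()
print(result)  # -> [0, 4, 16, 36, 64]
```

In real Spark the same shape appears as `sc.parallelize(range(10)).map(...).filter(...).collect()`, with the added dimension that the data is partitioned across cluster nodes.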

APACHE SPARK 2 TRAINING PREREQUISITES

Students should have an introductory knowledge of Python or Scala. An overview of Scala is provided if needed. (Class can be customized for SQL data analysts, emphasizing SQL techniques and minimizing procedural coding.)

APACHE SPARK 2 TRAINING MATERIALS

All Spark training students receive comprehensive courseware.

SOFTWARE NEEDED FOR EACH PC:

  • Windows, Mac, or Linux PCs with the current Chrome or Firefox browser.
    • Most class activities create Spark code and visualizations in a browser-based notebook environment. The class also covers how these notebooks can be exported, and how to run Spark code outside the notebook environment.
  • Internet access
  • For classes delivered online, all participants need either dual monitors or a separate device logged into the online session so that they can do their work on one screen and watch the instructor on the other. A separate computer connected to a projector or large screen TV would be another way for students to see the instructor’s screen simultaneously with working on their own.

APACHE SPARK 2 TRAINING OUTLINE

  • Scala Ramp Up (Optional)
    • Scala Introduction, Variables, Data Types, Control Flow
    • The Scala Interpreter
    • Collections and their Standard Methods (e.g. map())
    • Functions, Methods, Function Literals
    • Class, Object, Trait
  • Introduction to Spark
    • Overview, Motivations, Spark Systems
    • Spark Ecosystem
    • Spark vs. Hadoop
    • Typical Spark Deployment and Usage Environments
  • RDDs and Spark Architecture
    • RDD Concepts, Partitions, Lifecycle, Lazy Evaluation
    • Working with RDDs – Creating and Transforming (map, filter, etc.)
    • Caching – Concepts, Storage Type, Guidelines
  • DataSets/DataFrames and Spark SQL
    • Introduction and Usage
    • Creating and Using a DataSet
    • Working with JSON
    • Using the DataSet DSL
    • Using SQL with Spark
    • Data Formats
    • Optimizations: Catalyst and Tungsten
    • DataSets vs. DataFrames vs. RDDs
  • Creating Spark Applications
    • Overview, Basic Driver Code, SparkConf
    • Creating and Using a SparkContext/SparkSession
    • Building and Running Applications
    • Application Lifecycle
    • Cluster Managers
    • Logging and Debugging
  • Spark Streaming
    • Overview and Streaming Basics
    • Structured Streaming
    • DStreams (Discretized Streams)
    • Architecture, Stateless, Stateful, and Windowed Transformations
    • Spark Streaming API
    • Programming and Transformations
  • Performance Characteristics and Tuning
    • The Spark UI
    • Narrow vs. Wide Dependencies
    • Minimizing Data Processing and Shuffling
    • Caching – Concepts, Storage Type, Guidelines
    • Using Caching
    • Using Broadcast Variables and Accumulators
  • (Optional): Spark GraphX Overview
    • Introduction
    • Constructing Simple Graphs
    • GraphX API
    • Shortest Path Example
  • (Optional): MLlib Overview
    • Introduction
    • Feature Vectors
    • Clustering / Grouping, K-Means
    • Recommendations
    • Classifications
  • Conclusion
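
The tuning section of the outline covers minimizing shuffling and using broadcast variables. The idea behind a broadcast join can be sketched in plain Python, with no Spark required; all names and data below are invented for the example, and the "partitions" are just ordinary lists standing in for distributed data.

```python
# Illustrative sketch only (not Spark API code): a small lookup table is
# copied ("broadcast") to every partition, so the large dataset can be
# joined locally without shuffling it across partitions by key.

small_table = {1: "US", 2: "IN", 3: "DE"}   # small enough to broadcast

partitions = [                               # large side, already partitioned
    [(1, "alice"), (3, "carol")],
    [(2, "bob"), (1, "dave")],
]

def join_partition(part, lookup):
    # Each partition joins against its own local copy of the broadcast
    # table, so no cross-partition data movement is needed.
    return [(user, lookup[cc]) for cc, user in part if cc in lookup]

joined = [row for part in partitions for row in join_partition(part, small_table)]
print(joined)  # -> [('alice', 'US'), ('carol', 'DE'), ('bob', 'IN'), ('dave', 'US')]
```

In Spark itself the analogous move is broadcasting the small side (e.g. with `sc.broadcast(...)`, or a broadcast hint in Spark SQL) so the planner performs a map-side join instead of a shuffle join.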

CONTACT US

+91 9376007676  

COURSE DETAILS
  • Course No: SPRK-100
  • Theory: 40%
  • Lab: 60%
  • Duration: 18 hours
