Application fee: 1000 INR

Details

Certification Body: Aegis School of Data Science
Location: On-campus (Mumbai, Pune, and Bangalore, India)
Type: Certificate course
Director: Dr. Vinay Kulkarni
Coordinator: Ritin Joshi
Language: English
Course fee: 25000 INR
GST: 18%
Total course fee: 29500 INR
Rating: No ratings

Course Details

Spark, an alternative for fast data analytics
Although Hadoop captures the most attention for distributed data analytics, alternatives exist that offer interesting advantages over the typical Hadoop platform. Spark is a scalable data analytics platform that incorporates primitives for in-memory computing, giving it performance advantages over Hadoop's cluster storage approach. Spark is implemented in Scala and exploits that language to provide a unique environment for data processing. Get to know the Spark approach to cluster computing and how it differs from Hadoop.

Spark is an open source cluster computing environment similar to Hadoop, but with some useful differences that make it superior for certain workloads: Spark enables in-memory distributed datasets, which optimize iterative workloads in addition to interactive queries.

Spark is implemented in the Scala language and uses Scala as its application framework. Unlike Hadoop, Spark and Scala are tightly integrated, with Scala able to manipulate distributed datasets as easily as local collection objects.
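To make that integration concrete, here is a minimal Scala sketch (the data and the local master URL are illustrative, not from the course materials): the same filter and map operators apply to an ordinary Scala List and to a distributed RDD.

import org.apache.spark.{SparkConf, SparkContext}

object CollectionStyle {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("CollectionStyle").setMaster("local[*]"))

    // Ordinary Scala collection: filter and map run on one machine.
    val local = List(1, 2, 3, 4).filter(_ % 2 == 0).map(_ * 10)

    // Distributed dataset: the same operators, but each step is
    // partitioned across the cluster's worker nodes.
    val distributed = sc.parallelize(1 to 1000000)
      .filter(_ % 2 == 0)
      .map(_ * 10)

    println(local.mkString(", "))
    println(distributed.take(4).mkString(", "))
    sc.stop()
  }
}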

Although Spark was created to support iterative jobs on distributed datasets, it is actually complementary to Hadoop and can run side by side over the Hadoop file system. This behavior is supported through a third-party clustering framework called Mesos. Spark was developed at the Algorithms, Machines, and People Lab (AMPLab) at the University of California, Berkeley, to build large-scale, low-latency data analytics applications.


Spark cluster computing architecture
Although Spark has similarities to Hadoop, it represents a new cluster computing framework with useful differences. First, Spark was designed for a specific type of workload in cluster computing: workloads that reuse a working set of data across parallel operations, such as machine learning algorithms. To optimize for these workloads, Spark introduces the concept of in-memory cluster computing, where datasets can be cached in memory to reduce access latency.
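As a sketch of why caching matters for these workloads (the dataset and the computation here are made up for illustration), note how the working set is materialized in memory once and then reused on every pass:

import org.apache.spark.{SparkConf, SparkContext}

object IterativeWorkload {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("IterativeWorkload").setMaster("local[*]"))

    // Hypothetical working set, cached in cluster memory so that the
    // iterations below reuse it instead of recomputing its lineage.
    val points = sc.parallelize(1 to 100000).map(_.toDouble).cache()

    var estimate = 0.0
    for (i <- 1 to 10) {
      // Each pass reads the cached partitions directly from memory.
      estimate = points.map(p => p / (p + i)).mean()
    }
    println(s"estimate after 10 passes: $estimate")
    sc.stop()
  }
}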

Spark also introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects distributed across a set of nodes. These collections are resilient, because they can be rebuilt if a portion of the dataset is lost. The process of rebuilding a portion of the dataset relies on a fault-tolerance mechanism that maintains lineage (or information that allows the portion of the dataset to be re-created based on the process from which the data was derived). An RDD is represented as a Scala object and can be created from a file; as a parallelized slice (spread across nodes); as a transformation of another RDD; and finally through changing the persistence of an existing RDD, such as requesting that it be cached in memory.
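The four creation paths look like this in a short Scala sketch (the file path is a placeholder; textFile is lazy, so nothing is read until an action runs against that RDD):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object RddCreation {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("RddCreation").setMaster("local[*]"))

    // 1. From a file (placeholder path).
    val fromFile = sc.textFile("/tmp/input.txt")

    // 2. As a parallelized slice of a local collection, spread across
    //    the nodes in two partitions.
    val parallelized = sc.parallelize(Seq("spark", "scala", "mesos"), 2)

    // 3. As a transformation of another RDD.
    val transformed = parallelized.map(_.toUpperCase)

    // 4. By changing the persistence of an existing RDD, here by
    //    requesting that it be kept in memory.
    val cached = transformed.persist(StorageLevel.MEMORY_ONLY)

    println(cached.collect().mkString(", "))
    sc.stop()
  }
}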

Applications in Spark are called drivers, and these drivers implement the operations performed either on a single node or in parallel across a set of nodes. Like Hadoop, Spark supports a single-node cluster or a multi-node cluster. For multi-node operation, Spark relies on the Mesos cluster manager. Mesos provides an efficient platform for resource sharing and isolation for distributed applications (see Figure 1). This setup allows Spark to coexist with Hadoop in a single shared pool of nodes.
Figure 1. Spark relies on the Mesos cluster manager for resource sharing and isolation.

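In code, a driver is simply a program that creates a SparkContext pointed at a cluster manager. The sketch below uses a placeholder Mesos master URL; replacing it with "local[*]" runs the same driver on a single node.

import org.apache.spark.{SparkConf, SparkContext}

object MesosDriver {
  def main(args: Array[String]): Unit = {
    // Placeholder Mesos master URL; swap in "local[*]" for a
    // single-node run.
    val conf = new SparkConf()
      .setAppName("MesosDriver")
      .setMaster("mesos://mesos-master.example.com:5050")

    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 100).reduce(_ + _))
    sc.stop()
  }
}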

Spark programming model
A driver can perform two types of operations on a dataset: an action and a transformation. An action performs a computation on a dataset and returns a value to the driver; a transformation creates a new dataset from an existing dataset. Examples of actions include performing a Reduce operation (using a function) and iterating a dataset (running a function on each element, similar to the Map operation). Examples of transformations include the Map operation and the Cache operation (which requests that the new dataset be stored in memory).
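A brief Scala sketch of the distinction (the data is illustrative): transformations such as Map and Cache build new, lazily evaluated datasets, while actions such as Reduce and per-element iteration return results to the driver.

import org.apache.spark.{SparkConf, SparkContext}

object ActionsAndTransformations {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("ActionsAndTransformations").setMaster("local[*]"))

    val nums = sc.parallelize(1 to 10)

    // Transformations: create a new dataset from an existing one.
    val squares = nums.map(n => n * n)  // Map
    val cachedSquares = squares.cache() // Cache: keep the new dataset in memory

    // Actions: perform a computation and return a value to the driver.
    val total = cachedSquares.reduce(_ + _)            // Reduce, using a function
    cachedSquares.foreach(n => println(s"square: $n")) // iterate each element

    println(s"sum of squares: $total")
    sc.stop()
  }
}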

Course Content:

1. Basics

   • Why Spark?
   • What does it mean to learn Spark?
   • Spark basics
   • Installation
   • Scala: an introduction
   • The SparkContext
   • Introduction to RDDs
   • RDDs: creation, transformations, and actions
   • Exercises and applications

2. DataFrames, Datasets, SQL & Spark Streaming

   • Introduction
   • The SQLContext
   • Data I/O
   • Transformations and actions
   • Concepts and elements of streaming
   • Working with Spark Streaming
   • Exercises and applications

3. Spark MLlib & Spark GraphX

   • Basics of MLlib
   • Statistics using MLlib
   • Machine learning using MLlib
   • Basics of graph processing
   • GraphX RDDs
   • Applications of GraphX
   • Exercises and applications

4. Case Studies, Applications & Project Discussions

   • Case studies and applications
   • Project discussions
   • Spark resources
   • Trends

5. Final Project