Getting Started With Apache Spark

Introduction to Spark:

Apache Spark is a framework for performing general data analytics on a distributed computing cluster such as Hadoop. It provides in-memory computation for increased speed and faster data processing compared to MapReduce. It runs on top of an existing Hadoop cluster and can access the Hadoop data store (HDFS). It can also process structured data in Hive and streaming data from HDFS, Flume, Twitter, and so on.

Figure 1: Apache Spark.

Hadoop is a parallel data processing framework that has traditionally been used to run map/reduce jobs. These are long-running jobs that take minutes or hours to complete. Spark is designed to run on top of Hadoop as an alternative to the traditional batch map/reduce model, and it can be used for real-time stream processing and fast interactive queries that finish within seconds. So Hadoop supports both traditional map/reduce and Spark.

What is Spark used for?

  • Machine learning: As data volumes grow, machine learning approaches become more feasible and increasingly accurate. Software can be trained to identify and act upon triggers within well-understood data sets before applying the same solutions to new and unknown data. Spark’s ability to store data in memory and rapidly run repeated queries makes it well suited to training machine learning algorithms (a small sketch follows this list).
  • Interactive analytics: Rather than running pre-defined queries to create static dashboards of sales or production-line performance, business analysts and data scientists increasingly want to explore their data by asking a question, viewing the result, and then either altering the initial question slightly or drilling deeper into the results. This interactive query process requires systems such as Spark that are able to respond and adapt quickly.
  • Data integration: Data produced by different systems across a business is rarely clean or consistent enough to simply and easily be combined for reporting or analysis. Extract, transform, and load (ETL) processes are often used to pull data from different systems, clean and standardize it, and then load it into a separate system for analysis. Spark (and Hadoop) is increasingly being used to reduce the cost and time required for this ETL process.
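
The machine learning use case above hinges on Spark keeping the training data in memory across iterations. As a minimal, hypothetical sketch (it assumes a spark-shell with an existing SparkContext named sc and a made-up CSV of numeric features on HDFS), clustering with MLlib looks roughly like this in Scala:

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    // Parse each CSV line into a dense feature vector and cache it,
    // because iterative training re-reads the same data many times.
    val points = sc.textFile("hdfs:///data/points.csv")
      .map(line => Vectors.dense(line.split(',').map(_.toDouble)))
      .cache()

    // Train k-means with 3 clusters and up to 20 iterations (illustrative values).
    val model = KMeans.train(points, 3, 20)
    model.clusterCenters.foreach(println)

Each k-means iteration scans the same points RDD, so caching it in memory is what makes the repeated passes cheap.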

Spark Ecosystem:

The Spark ecosystem comprises the components shown in Figure 2. These components are built on top of the Spark Core engine, which lets you write Spark programs in Scala or Java and launches them; everything in the ecosystem is ultimately executed by the Spark Core engine.

Figure 2: Spark Ecosystem
  • Spark Streaming: Spark Streaming can be used to process real-time streaming data. It is based on a micro-batch style of computing and processing, and it uses DStreams, which are essentially a series of RDDs, to process the real-time data.
  • Spark SQL: Spark SQL provides the capability to expose Spark datasets over the JDBC API and allows running SQL-like queries on Spark data using traditional BI and visualization tools. Spark SQL lets users ETL their data from the different formats it is currently in (such as JSON, Parquet, or a database), transform it, and expose it for ad-hoc querying, as sketched just after this list.
  • Spark MLlib: MLlib is Spark’s scalable machine learning library consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as underlying optimization primitives.
  • Spark GraphX: GraphX is the new (alpha) Spark API for graphs and graph-parallel computation. At a high level, GraphX extends the Spark RDD by introducing the Resilient Distributed Property Graph: a directed multigraph with properties attached to each vertex and edge. To support graph computation, GraphX exposes a set of fundamental operators (e.g., subgraph, joinVertices, and aggregateMessages) as well as an optimized variant of the Pregel API.
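
To make the Spark SQL bullet concrete, here is a minimal sketch. It assumes an existing SparkContext named sc and a hypothetical JSON file of people records on HDFS; the path, table name, and columns are made up for illustration:

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)

    // Load semi-structured JSON into a DataFrame and expose it for ad-hoc SQL.
    val people = sqlContext.read.json("hdfs:///data/people.json")
    people.registerTempTable("people")

    val adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 18")
    adults.show()

The same data can also be served to BI tools over JDBC through the Spark SQL Thrift server, which is what exposing Spark datasets over the JDBC API refers to.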

Architectural Overview of Spark:

From the architecture perspective, Apache Spark is based on two key concepts: resilient distributed datasets (RDDs) and a directed acyclic graph (DAG) execution engine. With regard to datasets, Spark supports two types of RDDs: parallelized collections that are based on existing Scala collections, and Hadoop datasets that are created from files stored on HDFS. RDDs support two kinds of operations: transformations and actions. Transformations create new datasets from the input (e.g. map or filter operations are transformations), whereas actions return a value after executing calculations on the dataset (e.g. reduce or count operations are actions).
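
As a minimal sketch of the difference (assuming a spark-shell with an existing SparkContext named sc), transformations only describe new RDDs, while actions actually trigger computation:

    // Parallelized collection: an RDD built from an existing Scala collection.
    val numbers = sc.parallelize(1 to 100)

    // Transformations are lazy: they only record how to compute a new RDD.
    val evens   = numbers.filter(_ % 2 == 0)
    val squares = evens.map(n => n * n)

    // Actions run the computation and return a value to the driver.
    println(squares.count())       // 50
    println(squares.reduce(_ + _)) // sum of the even squares

Nothing is executed until an action such as count or reduce is called; at that point Spark turns the recorded lineage of transformations into a DAG of tasks.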

Figure 3: Basic Architecture of Spark
  • Spark Driver (Master): The Spark master controls the workflow, and a Spark worker launches executors that are responsible for executing part of the job submitted to the Spark master. The driver program is the process that runs the main() function of the application and creates the SparkContext.
  • Cluster Manager: The cluster manager is an external service for acquiring resources on the cluster. Spark requires a cluster manager and a distributed storage system. For cluster management, Spark can work with tools such as Hadoop YARN or Apache Mesos, or use its own standalone cluster manager.
  • Spark Worker: Any node that can run application code in the cluster is called a worker node. In Spark, many tasks can run concurrently in a single process, and this process sticks around for the lifetime of the Spark application, even when no jobs are running. An executor is a process launched for an application on a worker node; it runs tasks and keeps data in memory or on disk across them. Each application has its own executors.

Working of the Spark Architecture:

Figure 4: Working of Spark Architecture
  • As shown in Figure 4, Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).
  • Specifically, to run on a cluster, the SparkContext can connect to several types of cluster managers (either Spark’s own standalone cluster manager, Mesos or YARN), which allocate resources across applications.
  • Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application.
  • It sends your application code (defined by JAR or Python files passed to SparkContext) to the executors.
  • Finally, SparkContext sends tasks to the executors to run. A minimal sketch of the whole sequence follows below.
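
The sketch below shows these steps in a self-contained Scala driver program. The master URL, application name, memory setting, and input path are illustrative assumptions; on a real cluster they are usually supplied through spark-submit rather than hard-coded:

    import org.apache.spark.{SparkConf, SparkContext}

    object MyApp {
      def main(args: Array[String]): Unit = {
        // The driver program creates the SparkContext, which connects to a
        // cluster manager (a standalone master here; it could be YARN or Mesos).
        val conf = new SparkConf()
          .setAppName("MyApp")
          .setMaster("spark://master-host:7077")   // or "local[*]" for testing
          .set("spark.executor.memory", "2g")
        val sc = new SparkContext(conf)

        // Spark acquires executors, ships the application code to them,
        // and sends them tasks; here the tasks count lines of an HDFS file.
        val lines = sc.textFile("hdfs:///data/input.txt")
        println(lines.count())

        sc.stop()
      }
    }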

 Features of Spark:

  • Speed:

Spark enables applications in Hadoop clusters to run up to 100x faster in memory, and 10x faster even when running on disk. Spark makes this possible by reducing the number of reads and writes to disk: it stores intermediate processing data in memory. It uses the concept of a Resilient Distributed Dataset (RDD), which allows it to transparently store data in memory and persist it to disk only when it's needed. This helps reduce most of the disk reads and writes, the main time-consuming factors in data processing.
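
As a small sketch of what this looks like in practice (assuming an existing SparkContext named sc and a hypothetical log file on HDFS), persisting an RDD keeps it in memory between repeated actions, so only the first pass pays the disk-read cost:

    import org.apache.spark.storage.StorageLevel

    val logs   = sc.textFile("hdfs:///data/access.log")
    val errors = logs.filter(_.contains("ERROR"))

    // Keep the filtered RDD in memory, spilling to disk only if it does not fit.
    errors.persist(StorageLevel.MEMORY_AND_DISK)

    println(errors.count())                                 // first action: reads from HDFS
    println(errors.filter(_.contains("timeout")).count())   // reuses the cached data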

  • Ease of Use:

Spark lets you quickly write applications in Java, Scala, or Python. This lets developers create and run applications in programming languages they already know, and it makes building parallel apps easy. Spark comes with a built-in set of over 80 high-level operators, and you can use it interactively to query data from the shell, too.
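
For example, a word count, which takes a substantial amount of Java in classic MapReduce, fits in a few lines of Scala typed directly into the spark-shell (the input path is an assumption for illustration):

    val counts = sc.textFile("hdfs:///data/book.txt")
      .flatMap(line => line.split("\\s+"))   // split each line into words
      .map(word => (word, 1))                // pair each word with a count of 1
      .reduceByKey(_ + _)                    // sum the counts per word

    counts.take(10).foreach(println)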

  • Combines SQL, streaming, and complex analytics:

              In addition to simple “map” and “reduce” operations, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms.

  • Runs Everywhere:

            Spark runs on Hadoop, Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra and HBase.
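
In practice, "runs everywhere" mostly shows up as a change of URI or connector rather than a change of code. A hedged sketch (the hostnames and paths are made up; Cassandra and HBase access goes through their respective Spark connectors, which are separate packages):

    // The same RDD API reads from different storage systems; only the URI changes.
    val fromHdfs  = sc.textFile("hdfs://namenode:8020/data/events.log")
    val fromLocal = sc.textFile("file:///tmp/events.log")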

Conclusion:

          Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines, which is maintained in a fault-tolerant way.

        It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store reduction results on disk. Spark’s RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.


 
