Integrating R and Hadoop

Hello Everyone,

In this blog, I am going to discuss different ways of integrating R and Hadoop.

Firstly, let me tell you that Hadoop and R are a natural match and are quite complementary in terms of visualization and analytics of big data.

Let me outline the ways in which R and Hadoop can be integrated to scale ordinary data analytics up to big data analytics. Before that, a quick word on my own background with the two: I have been familiar with Hadoop for a long time, and since I have a keen interest in big data, the terminology around it has always fascinated me. R, on the other hand, is quite new to me; since it is increasingly being connected to Hadoop, I researched it as well.

So what is R?

R is a language and environment for statistical computing and graphics. The term “environment” is intended to characterize it as a fully planned and coherent system, rather than an incremental accretion of very specific and inflexible tools, as is frequently the case with other data analysis software.

R is an integrated suite of software facilities for data manipulation, calculation and graphical display. Among other things it has

  • an effective data handling and storage facility,

  • a suite of operators for calculations on arrays, in particular matrices,

  • a large, coherent, integrated collection of intermediate tools for data analysis,

  • graphical facilities for data analysis and display, either directly at the computer or on hard copy, and

  • a well-developed, simple and effective programming language (derived from ‘S’) which includes conditionals, loops, user-defined recursive functions and input and output facilities. (Indeed, most of the system-supplied functions are themselves written in the R language.)
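To make these facilities concrete, here is a tiny base-R session touching a few of them (purely illustrative; the data are random draws, so your output will differ):

    # Vectorized arithmetic, a model fit, matrix operators and a plot
    x <- rnorm(100)                 # 100 draws from a standard normal
    y <- 2 * x + rnorm(100, sd = 0.5)
    summary(lm(y ~ x))              # fit and summarise a linear model
    m <- matrix(1:6, nrow = 2)
    m %*% t(m)                      # matrix multiplication
    hist(x)                         # graphical display of the sample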

I find R very interesting because it acts as a vehicle for newly developing methods of interactive data analysis. It has developed rapidly and has been extended by a large collection of packages. However, most programs written in R are essentially ephemeral, written for a single piece of data analysis.

In the areas of interactive data analysis, general purpose statistics and predictive modelling, R has gained massive popularity due to its classification, clustering and ranking capabilities. The R language is widely used among statisticians and data miners for developing statistical software and performing data analysis.

I have been working with Hadoop for quite some time now, so let me summarize it briefly.

Hadoop is an open-source framework that allows you to store and process big data in a distributed environment across clusters of computers using simple programming models. It is written in Java and provides distributed storage and distributed processing of very large data sets on clusters built from commodity hardware. All the modules in Hadoop are designed with the fundamental assumption that hardware failures are common and should be handled automatically by the framework. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

The two main concepts associated with Hadoop are the Hadoop Distributed File System (HDFS) and the MapReduce processing engine. HDFS provides the distributed storage, while MapReduce executes the programs: it is a framework for the scalable, reliable and distributed processing of very large data sets across clusters of computers using a simple programming model. The advent of Hadoop was inspired by Google’s MapReduce and GFS papers.
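Before any Hadoop machinery enters the picture, the MapReduce idea itself can be sketched in a few lines of plain R. This toy word count mimics the map, shuffle and reduce phases in a single process; on a real cluster each phase runs distributed across many nodes:

    docs <- c("big data", "big data analytics")
    # Map: emit a (word, 1) pair for every word in every document
    pairs <- unlist(lapply(strsplit(docs, " "),
                           function(ws) setNames(rep(1, length(ws)), ws)))
    # Shuffle: group the emitted values by key (the word)
    groups <- split(unname(pairs), names(pairs))
    # Reduce: sum the counts within each group
    vapply(groups, sum, numeric(1))   # analytics = 1, big = 2, data = 2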

Hadoop splits files into large blocks and distributes them across the nodes in a cluster. To process the data, Hadoop then ships packaged code to the nodes, so that each node processes, in parallel, the data it already holds. This approach takes advantage of data locality (nodes manipulate the data they have local access to), allowing the dataset to be processed faster and more efficiently than in a more conventional supercomputer architecture that relies on a parallel file system, where computation and data are distributed via high-speed networking.

Hadoop can run in three different modes: standalone, pseudo-distributed, and fully distributed.

When it comes down to processing large data sets, Hadoop’s MapReduce programming model allows such volumes to be processed in a fault-tolerant and cost-effective manner. Hadoop also triumphs over relational database management systems when it comes to processing very large, mostly unstructured data sets. Finally, many businesses have already realized the promise that Hadoop holds, and its value to businesses will only grow as unstructured data keeps growing.

By this we get to know that Hadoop is a disruptive Java-based programming framework that supports the processing of large data sets in a distributed computing environment, while R is a programming language and software environment for statistical computing and graphics.

Now, for bringing R and Hadoop together: as mentioned at the start, the two complement each other quite well, with R supplying the analytics and visualization and Hadoop supplying distributed storage and processing for big data.

The most common way to link R and Hadoop is to use HDFS (potentially managed by Hive or HBase) as the long-term store for all data, and use MapReduce jobs (potentially submitted from Hive, Pig, or Oozie) to encode, enrich, and sample data sets from HDFS into R. Data analysts can then perform complex modelling exercises on a subset of prepared data in R.
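As a sketch of that workflow, the snippet below pulls a MapReduce-prepared extract out of HDFS with the rhdfs package (introduced later in this post) and models it locally. The paths and column names are invented for illustration, and HADOOP_CMD must point at your Hadoop installation:

    library(rhdfs)
    hdfs.init()
    hdfs.ls("/data/prepared")              # inspect the prepared extract
    hdfs.get("/data/prepared/sample.csv",  # copy the sample to local disk
             "sample.csv")
    sample_df <- read.csv("sample.csv")
    # The complex modelling exercise then happens in ordinary, local R
    model <- glm(clicked ~ impressions + hour,
                 data = sample_df, family = binomial)
    summary(model)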

With that background in place, let’s look at the integration options themselves.

There are four different ways of using Hadoop and R together:

1. RHadoop

RHadoop is a great open-source solution for R and Hadoop provided by Revolution Analytics: a collection of R packages for managing and analysing data with the Hadoop framework.

Its three primary packages are rmr, rhdfs and rhbase. The rmr package provides Hadoop MapReduce functionality in R, rhdfs provides HDFS file management from R, and rhbase provides HBase database management from within R. Together, these packages make it easier to analyse and manage data living in the Hadoop framework.
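To give a flavour of rmr (the current release of the package is named rmr2), here is the classic introductory job from the RHadoop tutorials: squaring a vector of integers on the cluster. It assumes rmr2 is installed and that the HADOOP_CMD and HADOOP_STREAMING environment variables point at your Hadoop installation:

    library(rmr2)
    ints <- to.dfs(1:1000)            # push a small vector into HDFS
    job  <- mapreduce(input = ints,   # run a map-only job on the cluster
                      map = function(k, v) keyval(v, v^2))
    head(from.dfs(job)$val)           # read the (n, n^2) results back into R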

2. ORCH

ORCH stands for Oracle R Connector for Hadoop. It can be used on the Oracle Big Data Appliance or on non-Oracle Hadoop clusters, and it is a collection of R packages that provide interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle Database tables. Additionally, ORCH provides predictive analytics techniques that can be applied to data in HDFS files.

3. RHIPE

RHIPE stands for R and Hadoop Integrated Programming Environment. It is an R package that provides an API for driving Hadoop from R, comparable in spirit to RHadoop but with a different API, and it is designed around Divide and Recombine (D&R) techniques for analysing large datasets.
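A heavily hedged sketch of RHIPE’s style follows: map and reduce logic are written as R expressions that read the task’s input records from map.values and emit key-value pairs with rhcollect. Exact function signatures have changed across RHIPE releases, so treat the names below as indicative and check the RHIPE documentation before running anything:

    library(Rhipe)
    rhinit()                          # initialise the RHIPE runtime
    map <- expression({
      # map.values holds this task's input records; rhcollect emits pairs
      lapply(map.values, function(v) rhcollect(v, v^2))
    })
    # rhwatch submits the job and waits; paths are illustrative HDFS paths
    job <- rhwatch(map = map, input = "/tmp/ints", output = "/tmp/squares")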

4. Hadoop streaming

Hadoop Streaming is a utility that allows users to create and run MapReduce jobs with any executables as the mapper and/or the reducer. The executables simply read from standard input and write to standard output, so they can be written in any language, R included; using the streaming system, one can develop working Hadoop jobs with just enough knowledge of Java to launch the streaming jar. A minimal word-count pair written in R is sketched below.
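Both scripts below speak the streaming protocol: read lines from standard input and write tab-separated key-value pairs to standard output. This assumes Rscript is available on every node and the scripts are executable; the jar and HDFS paths in the submission command are placeholders:

    #!/usr/bin/env Rscript
    # mapper.R -- emit a "word<TAB>1" line for every word on stdin
    con <- file("stdin", open = "r")
    while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
      words <- unlist(strsplit(tolower(line), "[^a-z]+"))
      cat(sprintf("%s\t1\n", words[nzchar(words)]), sep = "")
    }
    close(con)

    #!/usr/bin/env Rscript
    # reducer.R -- sum the counts per word; Hadoop delivers keys sorted
    con <- file("stdin", open = "r")
    current <- NA_character_; total <- 0L
    flush_key <- function(k, n) if (!is.na(k)) cat(k, "\t", n, "\n", sep = "")
    while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
      parts <- strsplit(line, "\t", fixed = TRUE)[[1]]
      if (!identical(parts[1], current)) {   # key changed: emit previous total
        flush_key(current, total)
        current <- parts[1]; total <- 0L
      }
      total <- total + as.integer(parts[2])
    }
    flush_key(current, total)                # emit the final key
    close(con)

    # Submit the job (jar path and HDFS paths are placeholders):
    # hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    #   -input /data/books -output /data/wordcounts \
    #   -mapper mapper.R -reducer reducer.R \
    #   -file mapper.R -file reducer.R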

I will have to say that the combination of R and Hadoop is emerging as a must-have toolkit for people working with statistics and large data sets. However, some Hadoop enthusiasts have raised a red flag about extremely large data sets. They point out that the advantage of R is not its syntax but its exhaustive library of primitives for visualization and statistics, and that these libraries are fundamentally non-distributed, which makes moving data through them a time-consuming affair. This is an inherent limitation of R; if you can live with it, R and Hadoop in tandem can still work wonders.

