Install Spark on Ubuntu – A Beginner's Tutorial for Apache Spark
1. Objective – Install Spark
This tutorial covers the first step in learning Apache Spark: installing Spark on Ubuntu. It is a step-by-step guide to installing Spark, configuring the prerequisites, and launching the Spark shell to perform various operations. If you are completely new to Apache Spark, I recommend reading these introductory blogs first: What is Spark, the Spark ecosystem, Spark's key abstraction RDD, Spark features, and limitations of Apache Spark.
2. Steps for Apache Spark Installation On Ubuntu
Follow the steps given below for Apache Spark installation on Ubuntu:
i. Deployment Platform
a. Platform Requirements
- Operating System: Ubuntu 14.04 or later (other Linux distributions such as CentOS or Red Hat also work)
- Spark: Apache Spark 1.6.1 or later
b. Setup Platform
If you are using Windows or Mac OS, you can create a virtual machine and install Ubuntu in it using either VMware Player or Oracle VirtualBox.
ii. Install Java 7
Install python-software-properties (it provides the add-apt-repository command) and add the Java PPA:
$ sudo apt-get install python-software-properties
$ sudo add-apt-repository ppa:webupd8team/java
Update the source list and install Java:
$ sudo apt-get update
$ sudo apt-get install oracle-java7-installer
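After installation, it is worth confirming that the JDK is actually reachable from your shell. A minimal check (the guard is only there so the snippet degrades gracefully on a machine where Java is not yet installed):

```shell
# Check whether a JDK is on PATH; note that `java -version` prints to stderr
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "Java not found on PATH; re-run the installation steps above"
fi
```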
iii. Install Apache Spark
a. Download Spark
You can download Apache Spark from the link below. Under package type, select “Pre-built for Hadoop 2.6 and Later”.
Or, you can use the direct download link:
b. Untar Spark Setup
$ tar xzf spark-1.6.1-bin-hadoop2.6.tgz
All the scripts and configuration files are in the newly created directory “spark-1.6.1-bin-hadoop2.6”.
c. Setup Configuration
Edit the .bashrc file located in the user’s home directory and add the following parameters:
export JAVA_HOME=<path-to-the-root-of-your-Java-installation> (e.g. /usr/lib/jvm/java-7-oracle/)
export SPARK_HOME=<path-to-the-root-of-your-spark-installation> (e.g. /home/dataflair/spark-1.6.1-bin-hadoop2.6/)
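As a concrete sketch (the paths below are illustrative, taken from this tutorial's layout; substitute your own install locations), the additions could look like this, with Spark's bin directory also appended to PATH so spark-shell can be launched from anywhere:

```shell
# Example .bashrc additions; paths are illustrative, adjust to your system
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export SPARK_HOME="$HOME/spark-1.6.1-bin-hadoop2.6"
export PATH="$PATH:$SPARK_HOME/bin"
```

After editing, run `source ~/.bashrc` (or open a new terminal) so the variables take effect; `echo $SPARK_HOME` should then print the Spark directory.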
iv. Launch the Spark Shell
Go to the Spark home directory (spark-1.6.1-bin-hadoop2.6) and run the command below to start the Spark shell:
$ ./bin/spark-shell
Once the Spark shell is launched, you can start playing with Spark.
a. Spark UI
This is the web UI for the Spark application; in local mode the Spark shell itself runs as an application, and the UI is available at http://localhost:4040 by default. It provides details about stages, storage (cached RDDs), environment variables, and executors.
v. Spark Commands / Operations
Once you have installed Apache Spark, you can use the Spark shell to perform various operations such as transformations and actions, and to create RDDs. Follow this guide on shell commands for working with Spark.
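As a quick smoke test of the kind of operation meant here (assuming Spark's bin directory is on your PATH; otherwise run ./bin/spark-shell from the Spark home directory), you can pipe a one-line job into the shell non-interactively. The snippet creates an RDD, applies a filter transformation, and triggers the count action:

```shell
# Count the even numbers in 1..10 with an RDD; `sc` is the SparkContext
# that spark-shell pre-defines. The guard skips the job gracefully when
# spark-shell is not installed yet.
if command -v spark-shell >/dev/null 2>&1; then
  echo 'println(sc.parallelize(1 to 10).filter(_ % 2 == 0).count())' \
    | spark-shell --master "local[*]"
else
  echo "spark-shell not on PATH; finish the configuration step first"
fi
```

Here `local[*]` tells Spark to run locally using all available cores.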
So, this was all in the tutorial on how to install Spark on Ubuntu. I hope the complete process is clear.
3. Conclusion – Spark installation
Hence, in this Spark installation tutorial, we discussed the steps to install Spark on Ubuntu. If you still face any problems, feel free to ask in the comments section.