How to Install Apache Spark on Multi-Node Cluster


1. Objective

This Spark tutorial explains how to install Apache Spark on a multi-node cluster. It provides step-by-step instructions to deploy and configure Apache Spark on a real multi-node cluster. Once setup and installation are complete, you can start working with Spark and processing data.


2. Steps to install Apache Spark on multi-node cluster

Follow the steps below to install Apache Spark on a multi-node cluster.

2.1. Recommended Platform

  • OS – Linux is supported as both a development and a deployment platform. You can use Ubuntu 14.04 / 16.04 or later (other Linux flavors such as CentOS or Red Hat also work). Windows is supported as a development platform only. (If you are new to Linux, follow this Linux commands manual.)
  • Spark – Apache Spark 2.x

For Apache Spark installation on a multi-node cluster, you will need multiple nodes. You can either use Amazon AWS instances or follow this guide to set up a virtual platform using VMware Player.

2.2. Install Spark on Master

I. Prerequisites

a. Add Entries in hosts file

Edit hosts file

sudo nano /etc/hosts

Now add entries for the master and slaves:

MASTER-IP master
SLAVE01-IP slave01
SLAVE02-IP slave02

(NOTE: Replace MASTER-IP, SLAVE01-IP, and SLAVE02-IP with the corresponding IP addresses.)
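As a purely illustrative example (the 192.168.0.x addresses below are assumptions; substitute the real addresses of your machines), the finished entries might look like:

```
192.168.0.1 master
192.168.0.2 slave01
192.168.0.3 slave02
```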

b. Install Java 7 (Recommended Oracle Java)
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
c. Install Scala
sudo apt-get install scala
d. Configure SSH
i. Install Open SSH Server-Client
sudo apt-get install openssh-server openssh-client
ii. Generate Key Pairs
ssh-keygen -t rsa -P ""
iii. Configure passwordless SSH

Copy the content of .ssh/id_rsa.pub (from the master) to .ssh/authorized_keys (on all the slaves as well as on the master itself).
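One way to do this from the master is a short command sequence (a sketch; it assumes ssh-copy-id is available and the slaves still accept password logins at this point):

```shell
# On the master: authorize the key for the master itself,
# then push it to each slave (you will be prompted for each slave's password once)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub slave01
ssh-copy-id -i ~/.ssh/id_rsa.pub slave02
```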

iv. Check by SSH to all the Slaves
ssh slave01
ssh slave02

II. Install Spark

a. Download Spark

You can download the latest version of Spark from http://spark.apache.org/downloads.html.

b. Untar Tarball

tar xzf spark-2.0.0-bin-hadoop2.6.tgz

(Note: All the scripts, jars, and configuration files are available in the newly created directory “spark-2.0.0-bin-hadoop2.6”.)

c. Setup Configuration
i. Edit .bashrc

Now edit the .bashrc file located in the user’s home directory and add the following environment variables:

export JAVA_HOME=<path-of-Java-installation> (eg: /usr/lib/jvm/java-7-oracle/)
export SPARK_HOME=<path-to-the-root-of-your-spark-installation> (eg: /home/dataflair/spark-2.0.0-bin-hadoop2.6/)
export PATH=$PATH:$SPARK_HOME/bin

(Note: After the above step, restart the terminal/PuTTY session so that the environment variables take effect.)
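To confirm the variables are visible without restarting, you can reload the file and check (a sketch; `spark-submit --version` will only work once the PATH change is active and Spark is unpacked at $SPARK_HOME):

```shell
source ~/.bashrc        # reload the file in the current session
echo "$SPARK_HOME"      # should print the root of your Spark installation
spark-submit --version  # should print the Spark version banner
```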

ii. Edit spark-env.sh

Now edit the configuration file spark-env.sh (in $SPARK_HOME/conf/) and set the following parameters.

Note: First create spark-env.sh from its template:

cp spark-env.sh.template spark-env.sh
export JAVA_HOME=<path-of-Java-installation> (eg: /usr/lib/jvm/java-7-oracle/)
export SPARK_WORKER_CORES=8
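Other worker settings are optional. As an illustrative sketch (the memory value and the SPARK_MASTER_HOST address are assumptions; adjust them to your hardware and network), spark-env.sh might end up looking like:

```shell
export JAVA_HOME=/usr/lib/jvm/java-7-oracle/   # path of your Java installation
export SPARK_WORKER_CORES=8                    # cores each worker may use
export SPARK_WORKER_MEMORY=8g                  # optional: memory each worker may use
export SPARK_MASTER_HOST=192.168.0.1           # optional: bind the master to a fixed IP
```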
iii. Add Slaves

Create the configuration file slaves (in $SPARK_HOME/conf/) and add the following entries:

slave01
slave02
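The file can be created from the bundled template, for example (a sketch run from within $SPARK_HOME/conf/):

```shell
cd "$SPARK_HOME/conf"
cp slaves.template slaves            # start from the shipped template
printf 'slave01\nslave02\n' >> slaves
tail -n 2 slaves                     # the last two lines should be the worker hostnames
```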

Apache Spark has now been installed successfully on the master. Next, deploy Spark on all the slaves.

2.3. Install Spark On Slaves

I. Setup Prerequisites on all the slaves

Run the following steps on all the slaves (or worker nodes):

  • “a. Add Entries in hosts file”
  • “b. Install Java 7”
  • “c. Install Scala”

II. Copy setups from master to all the slaves

a. Create tarball of configured setup
tar czf spark.tar.gz spark-2.0.0-bin-hadoop2.6

NOTE: Run this command on Master

b. Copy the configured tarball on all the slaves
scp spark.tar.gz slave01:~

NOTE: Run this command on Master

scp spark.tar.gz slave02:~

NOTE: Run this command on Master

III. Untar the configured Spark setup on all the slaves

tar xzf spark.tar.gz

NOTE: Run this command on all the slaves
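The copy and unpack steps above can also be combined into a single loop run from the master (a sketch; it assumes the hostnames from /etc/hosts and working passwordless SSH):

```shell
# Package the configured Spark directory once, then copy and unpack it on each slave
tar czf spark.tar.gz spark-2.0.0-bin-hadoop2.6
for host in slave01 slave02; do
  scp spark.tar.gz "$host":~
  ssh "$host" 'tar xzf ~/spark.tar.gz'
done
```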

Congratulations, Apache Spark has been installed on all the slaves. Now start the daemons on the cluster.

2.4. Start Spark Cluster

I. Start Spark Services

sbin/start-all.sh

Note: Run this command on Master

II. Check whether services have been started

a. Check daemons on Master
jps
Master
b. Check daemons on Slaves
jps
Worker
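To check the daemons from a script rather than by eye, you can grep the jps output (a sketch; “Master” and “Worker” are the process names jps reports for the standalone daemons):

```shell
# On the master:
jps | grep -q Master && echo "Master daemon is running"
# On each slave:
jps | grep -q Worker && echo "Worker daemon is running"
```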

2.5. Spark Web UI

I. Spark Master UI

Browse the Spark master UI to see the worker nodes, running applications, and cluster resources.

http://MASTER-IP:8080/

II. Spark application UI

http://MASTER-IP:4040/

2.6. Stop the Cluster

I. Stop Spark Services

Once all the applications have finished, you can stop the Spark services (master and slave daemons) running on the cluster:

sbin/stop-all.sh

Note: Run this command on Master

After installing Apache Spark, I recommend learning about Spark RDDs, DataFrames, and Datasets. You can then proceed to Spark shell commands to play with Spark.

3. Conclusion

After installing Apache Spark on the multi-node cluster, you are ready to work with the Spark platform. You can now play with data, create an RDD, perform operations on those RDDs over multiple nodes, and much more.

If you have any queries about installing Apache Spark, feel free to share them with us. We will be happy to help.


Reference:

http://spark.apache.org/



16 thoughts on “How to Install Apache Spark on Multi-Node Cluster”

  • Ashish Garg

    Thanks for the this great tutorial

    Don’t we need to setup the HDFS to share the repository with master and all workers?
    Can you share the tutorial for this?

  • Nitin

    Thanks for this lovely article. However, I am facing one problem when doing “jps Master” it is throwing “RMI Registry not available at Master:1099
    Connection refused to host: Master; nested exception is:
    java.net.ConnectException: Connection refused”
    this error. Can you help?

  • Krish Rajaram

    Thanks for this post. I followed these steps and successfully created the cluster with spark 2.1.0. While I was testing a simple dataframe writer, it fails to write the output file to the target path. This happens only when run through spark-submit. But when I run the commands from spark-shell the output file is successfully stored in the target path. Did anyone encounter this issue?

  • Abdel

    Hi! Thanks for this article, it’s very helpful. However, I did not understand this part of your tutorial:
    2.3.3 Add slaves:
    Create configuration file slaves (in $SPARK_HOME/conf/) and add following entries:
    slave01
    slave02
    Do we have to add these entries in the file spark-env.sh or what?

    Thanks in advance

  • Swaroop P

    I followed all your steps as you mentioned.

    I am unable to connect workers. Only the master is acting as both master and worker for me.

    Does the above process require a Hadoop installation? Because I didn’t install Hadoop or YARN.

    Please help me ASAP

    • Dinesh Dev Pandey

      Hi,
      I was facing the same problem. I checked the log generated for master. I found –
      “Service MasterUI is started on port 8081”.
      I tried with http://Master_IP:8081 and it worked for me.

      You can also check logs once.

  • Ugur

    Thanks for your awesome sharing,
    However, I have a problem. I setup multi-node spark according to your guidance but i cannot access with ip of master node(x.y.z.t:8080). How can i solve the problem?

    • Emiliano Amendola

      You need to add these two lines in the ~/$SPARK_HOME/conf/spark-env.sh file, in your master and worker nodes:

      export SPARK_MASTER_HOST=YOUR.MASTER.IP.ADDRESS
      export SPARK_MASTER_WEBUI_PORT=8080