Spark Shell Commands to Interact with Spark-Scala


1. Objective

The shell acts as an interface to access the operating system's services. Apache Spark ships with an interactive shell (a Scala prompt), from which we can run different commands to process data. This is an Apache Spark shell commands guide: a step-by-step list of basic Spark commands and operations to run in the Spark shell.

Before starting, you must have Spark installed. Follow this guide to install Apache Spark.

After installing Spark, you can create RDDs and perform various transformations and actions such as filter(), partitions(), cache(), count(), and collect(). In this blog, we will also discuss the integration of Spark with Hadoop: how Spark reads data from HDFS and writes it back to HDFS.


2. Scala – Spark Shell Commands

Start the Spark Shell

Apache Spark ships with an interactive shell (a Scala prompt), as Spark itself is developed in Scala. Using the interactive shell we will run different commands (RDD transformations and actions) to process the data.

The command to start the Apache Spark Shell:

$ bin/spark-shell

2.1. Create a new RDD

a) Read a file from the local filesystem and create an RDD.

scala> val data = sc.textFile("data.txt")

Note: sc is the SparkContext object, created automatically by the shell.

Note: You need to create a file data.txt in the SPARK_HOME directory.
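As a rough mental model (no Spark needed), textFile() exposes the file's lines as a distributed collection. Here is a plain-Scala sketch of the same idea, using a hypothetical temp file in place of data.txt:

```scala
import java.nio.file.Files
import scala.io.Source

// Hypothetical stand-in for data.txt: create a small temp file...
val path = Files.createTempFile("data", ".txt")
Files.write(path, "first line\nsecond line".getBytes)

// ...then read it line by line, which is conceptually what
// sc.textFile("data.txt") exposes as an RDD of lines (minus the distribution).
val lines = Source.fromFile(path.toFile).getLines().toList
```

Unlike this sketch, sc.textFile() is lazy: nothing is read until an action runs.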

b) Create an RDD through Parallelized Collection

scala> val no = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> val noData = sc.parallelize(no)

c) From Existing RDDs

scala> val newRDD = noData.map(data => (data * 2))

These are the three methods to create an RDD. The first method is used when data is already available in an external system such as a local filesystem, HDFS, HBase, Cassandra, or S3: one creates the RDD by calling the textFile method of SparkContext with a path/URL as the argument. The second approach works with existing in-memory collections, and the third creates a new RDD from an existing one.
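Methods (b) and (c) can be mirrored on a plain Scala collection — a minimal sketch (no SparkContext involved) of how the third method derives a new dataset from an existing one:

```scala
// Plain-Scala mirror of methods (b) and (c) above: start from an in-memory
// collection, then derive a new one with map, as newRDD is derived from noData.
val no = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
val doubled = no.map(data => data * 2)
```

The Spark version behaves the same way element-wise, but distributes the work across partitions.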

2.2. Number of Items in the RDD

Count the number of items available in the RDD. To count the items we need to call an Action:

scala> data.count()

2.3. Filter Operation

Filter the RDD and create a new RDD of the items which contain the word “DataFlair”. To filter, we call the filter transformation, which returns a new RDD with a subset of the items.

scala> val DFData = data.filter(line => line.contains("DataFlair"))

2.4. Transformation and Action together

For complex requirements, we can chain multiple operations together like filter transformation and count action together:

scala> data.filter(line => line.contains("DataFlair")).count()
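The same chain can be sanity-checked on an ordinary Scala List — a sketch with made-up lines, no shell required:

```scala
// Hypothetical input lines, standing in for the contents of data.txt
val lines = List("DataFlair tutorial", "Apache Spark", "DataFlair Spark guide")

// Chain a filter transformation with a count, exactly as in the shell command
val n = lines.filter(line => line.contains("DataFlair")).size
```

In Spark, the filter stays lazy and only the final count() triggers execution.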

2.5. Read the first item from the RDD

To read the first item from the file, you can use the following command:

scala> data.first()

2.6. Read the first 5 items from the RDD

To read the first 5 items from the file, you can use the following command:

scala> data.take(5)

2.7. RDD Partitions

An RDD is made up of multiple partitions, to count the number of partitions:

scala> data.partitions.length

Note: By default, the minimum number of partitions in an RDD is 2. When we create an RDD from an HDFS file, the number of partitions equals the number of HDFS blocks.
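Spark's real partitioner is more involved, but slicing a collection into roughly equal chunks gives the flavour. A rough plain-Scala sketch of splitting ten elements across four partitions (4 is an arbitrary choice here, playing the role of the optional numSlices argument to parallelize):

```scala
// Rough sketch: split a 10-element collection into 4 partitions, similar in
// spirit to sc.parallelize(no, 4) (Spark's actual slicing logic differs).
val no = (1 to 10).toList
val numSlices = 4
val sliceSize = math.ceil(no.size.toDouble / numSlices).toInt // 3 elements per slice
val partitions = no.grouped(sliceSize).toList                 // slice sizes: 3, 3, 3, 1
```

Each slice would be processed by a separate task in Spark, which is why the partition count bounds the parallelism.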

2.8. Cache the file

Caching is an optimization technique. Once we cache the RDD in memory, all future computations will work on the in-memory data, which saves disk seeks and improves performance.

scala> data.cache()

The RDD will not be cached as soon as you run the operation above; if you visit the web UI at http://localhost:4040/storage, the storage page will still be blank. cache() is lazy: the RDD is actually cached only when we run an action that needs to read the data from disk.

Let’s run some actions

scala> data.count()
scala> data.collect()

To perform these actions, Spark had to read the data file from disk. During this process, Spark caches the file, so all future operations get the data from memory with no disk interaction. Any subsequent transformation or action will run in memory and be much faster.
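The lazy behaviour of cache() can be mimicked with Scala's own lazy val — a sketch showing that marking a value is cheap and the expensive work only happens when something first needs it:

```scala
var reads = 0 // counts how many times the "expensive" load actually runs

// Declaring a lazy val is cheap, like calling cache(): no work happens yet
lazy val data = { reads += 1; List(1, 2, 3) }

val readsBeforeAction = reads // still 0: nothing has been computed
val total = data.sum          // the first "action" forces the computation
val totalAgain = data.sum     // reuses the stored value; no second load
```

This mirrors what the web UI shows: storage stays empty after cache() and only fills in once an action runs.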

2.9. Read Data from HDFS file

To read data from an HDFS file, specify the complete HDFS URL, like hdfs://IP:PORT/PATH:

scala> var hFile = sc.textFile("hdfs://localhost:9000/inp")

2.10. Spark WordCount Program in Scala

One of the most popular MapReduce operations is WordCount: count all the words available in the file.

scala> val wc = hFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

Read the result on the console:

scala> wc.take(5)

It will display the first 5 results.
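Before running it against HDFS, the word-count logic itself can be checked on a plain Scala List — a sketch using groupBy as a local stand-in for reduceByKey's shuffle:

```scala
// Hypothetical input lines, standing in for the HDFS file
val lines = List("spark shell spark", "hello spark")

val wc = lines
  .flatMap(line => line.split(" "))   // split every line into words
  .map(word => (word, 1))             // pair each word with a count of 1
  .groupBy(_._1)                      // local stand-in for reduceByKey's grouping
  .map { case (word, pairs) => (word, pairs.map(_._2).sum) } // sum the 1s
```

In Spark, reduceByKey combines counts within each partition before shuffling, so far less data crosses the network than a naive group-then-sum.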

2.11. Write the data to HDFS file

To write the data to HDFS:

scala> wc.saveAsTextFile("hdfs://localhost:9000/out")

3. Conclusion

In conclusion, using Spark shell commands we can create RDDs (in three ways), read from an RDD, and partition an RDD. We can also cache an RDD, read and write data from and to HDFS, and perform various operations on the data, all from the Apache Spark shell.

Now you can create your first Spark Scala project.
