Explain the term paired RDD in Apache Spark.


  • Author
    Posts
    • #6389
      DataFlair Team
      Spectator

      What is a paired RDD?
      What do you understand by a paired RDD in Spark?

    • #6390
      DataFlair Team
      Spectator

      Introduction
      A paired RDD is a distributed collection of data stored as key-value pairs. It is a special form of Resilient Distributed Dataset, so it has all the features of an RDD plus additional operations for key-value pairs. Many transformation operations are available only on paired RDDs, and they are very useful for use cases that require sorting, grouping, or reducing values by key.
      Commonly used operations on paired RDDs include groupByKey(), reduceByKey(), countByKey(), and join().
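
      The semantics of these operations can be sketched with plain Scala collections (a local analogue that runs without a Spark cluster; the words and counts below are made-up sample data):

      object PairedRddSemantics {
        def main(args: Array[String]): Unit = {
          // A local key-value collection standing in for a paired RDD
          val pairs = Seq(("spark", 1), ("rdd", 1), ("spark", 1))

          // reduceByKey: merge all values of each key with a function (here, sum)
          val reduced = pairs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }
          println(reduced) // "spark" sums to 2, "rdd" to 1

          // countByKey: number of elements per key
          val counts = pairs.groupBy(_._1).map { case (k, vs) => (k, vs.size) }
          println(counts)

          // join: pair up the values of matching keys from two collections
          val other = Seq(("spark", "engine"), ("rdd", "dataset"))
          val joined = for {
            (k1, v1) <- pairs
            (k2, v2) <- other
            if k1 == k2
          } yield (k1, (v1, v2))
          println(joined)
        }
      }

      In Spark itself, reduceByKey is usually preferred over groupByKey because it combines values on each partition before shuffling.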
      Creation of a paired RDD:

      val pRDD: RDD[(String, Int)] = sc.textFile("path_of_your_file")
        .flatMap(line => line.split(" "))
        .map(word => (word, word.length))
      We can also use the substring method: if we have a file with an id and some other details, we can create a paired RDD with the id as the key and the other details as the value.

      val pRDD2: RDD[(Int, String)] = sc.textFile("path_of_your_file")
        .keyBy(line => line.substring(1, 5).trim.toInt)
        .mapValues(line => line.substring(10, 30).trim)
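
      To show what keyBy and mapValues produce, here is a local sketch using the same substring logic on plain Scala collections (the column positions and sample records are assumptions matching the example above, not a fixed file format):

      object KeyByExample {
        def main(args: Array[String]): Unit = {
          // Build fixed-width sample lines: chars 1-4 hold the id, chars 10-29 the name
          val lines = Seq((1001, "Alice"), (1002, "Bob")).map { case (id, name) =>
            " " + id.toString + "     " + name.padTo(20, ' ') + "x"
          }

          // keyBy: key each line by the parsed id, keeping the whole line as the value
          val keyed = lines.map(line => (line.substring(1, 5).trim.toInt, line))

          // mapValues: transform only the value, leaving the key untouched
          val pRDD2 = keyed.map { case (k, line) => (k, line.substring(10, 30).trim) }

          pRDD2.foreach(println)
        }
      }

      The result is a collection of (id, name) pairs, which is exactly the shape join() and reduceByKey() expect.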
