
  • #6389

    dfbdteam5
    Moderator

    What are paired RDDs?
    What do you understand by a paired RDD in Spark?

    #6390

    dfbdteam5
    Moderator

    Introduction
    A paired RDD is a distributed collection of data organized as key-value pairs. It is a special form of the Resilient Distributed Dataset: it has all the features of an ordinary RDD plus extra operations that work on the keys. Many transformations are available only for paired RDDs, and they are very useful for use cases that require sorting, grouping, or reducing values by key.
    Commonly used operations on a paired RDD include groupByKey(), reduceByKey(), countByKey(), and join(); a short sketch using them follows the first creation example below.
    Creation of a paired RDD:

    import org.apache.spark.rdd.RDD

    val pRDD: RDD[(String, Int)] = sc.textFile("path_of_your_file")
      .flatMap(line => line.split(" "))
      .map(word => (word, word.length))
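
    As a rough illustration of the operations listed above (a minimal sketch, assuming a spark-shell session where sc exists and pRDD is the word/length RDD created just above):

    // Sum the recorded lengths for each distinct word (key).
    val totalLengthPerWord = pRDD.reduceByKey(_ + _)

    // Group every value that shares a key into one Iterable[Int].
    val grouped = pRDD.groupByKey()

    // Count how many pairs carry each key; returns a Map on the driver.
    val occurrences = pRDD.countByKey()

    // Sort by key and print a few results for inspection.
    totalLengthPerWord.sortByKey().take(10).foreach(println)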
    A paired RDD can also be created with keyBy() and substring() (if the file holds fixed-width records with an id and some other details, the id can become the key and the remaining details the value):

    val pRDD2: RDD[(Int, String)] = sc.textFile("path_of_your_file")
      .keyBy(line => line.substring(1, 5).trim.toInt)
      .mapValues(line => line.substring(10, 30).trim)
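
    To illustrate join() as well, here is a minimal sketch; the second RDD (idToName) and its sample ids and names are made up purely for this example, and only pRDD2 comes from the snippet above:

    // Hypothetical lookup data, parallelized from the driver just for the demo.
    val idToName = sc.parallelize(Seq((1001, "Alice"), (1002, "Bob")))

    // Inner join on the Int key: each matching id yields (id, (details, name)).
    val joined: RDD[(Int, (String, String))] = pRDD2.join(idToName)

    joined.take(5).foreach { case (id, (details, name)) =>
      println(s"$id -> $name : $details")
    }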

