PySpark Tutorial – Why is PySpark Gaining Hype Among Data Scientists?


With this PySpark tutorial, we will take you on a journey through the various aspects of the PySpark framework, and by the end of it you will have a solid working knowledge of PySpark.

The PySpark framework is gaining high popularity in the data science field. Spark is a very useful tool for data scientists to translate research code into production code, and PySpark makes this process easily accessible.

Without wasting any time, let’s start with our PySpark tutorial.

What is PySpark?

PySpark, released by the Apache Spark community, is the Python API for Spark: it lets you write Spark programs and work with RDDs directly from Python. The Py4J library makes this possible by bridging the Python interpreter and the JVM.
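As a quick illustration (a minimal sketch, which assumes a SparkContext named sc is already available, as it is in the PySpark shell used later in this tutorial), an RDD can be created and transformed with ordinary Python code:

nums = sc.parallelize([1, 2, 3, 4, 5])   # build an RDD from a local Python list
squares = nums.map(lambda x: x * x)      # lazily defines a new RDD; nothing runs yet
print squares.collect()                  # [1, 4, 9, 16, 25]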

There are several notable features of the PySpark framework:

  1. Faster processing than many traditional big data frameworks.
  2. Real-time computation with low latency, thanks to in-memory processing.
  3. Polyglot support: Spark is compatible with several languages, such as Java, Python, Scala, and R.
  4. Powerful caching and efficient disk persistence (see the short sketch after this list).
  5. Deployment on Hadoop clusters through YARN.
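As a rough sketch of the caching and persistence feature (again assuming a SparkContext named sc, and using "data.txt" as a placeholder path), an RDD can be kept in memory with cache() or written to disk with persist():

from pyspark import StorageLevel

lines = sc.textFile("data.txt")                                              # "data.txt" is a placeholder path
cached = lines.filter(lambda l: len(l) > 0).cache()                          # keep this RDD in memory after the first action
persisted = lines.map(lambda l: l.upper()).persist(StorageLevel.DISK_ONLY)   # or spill it to disk only
print cached.count(), persisted.count()                                      # both RDDs are now materialized and stored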

Audience for PySpark Tutorial

  • Professionals who aspire to build a career in programming and big data, and those who want to perform real-time processing with a distributed framework, can go for this PySpark tutorial.
  • Those who want to learn PySpark along with its various modules and submodules should also follow this PySpark tutorial.

Prerequisites to PySpark

We assume that readers already have basic knowledge of programming and of big data frameworks before starting this PySpark tutorial. A sound knowledge of Spark, Hadoop, HDFS, the Scala programming language, and Python is also recommended.

Key Facts about the PySpark API

There are a few key differences between the Python and Scala APIs which we will discuss in this PySpark Tutorial:

  • Since Python is dynamically typed, PySpark RDDs can hold objects of multiple types (see the short sketch after this list).
  • PySpark does not yet support a few API calls, such as lookup and non-text input files; support for these is planned for future releases.
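As a quick sketch of the first point (assuming a SparkContext named sc), a single PySpark RDD can hold values of several different Python types at once:

mixed = sc.parallelize([1, "two", 3.0, (4, "four")])    # one RDD, four different Python types
print mixed.map(lambda x: type(x).__name__).collect()   # ['int', 'str', 'float', 'tuple']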

RDDs in PySpark support the same methods as their Scala counterparts, but they take Python functions and return Python collection types as results. Short functions can be passed to RDD methods using Python's lambda syntax:

logData = sc.textFile(logFile).cache()                 # load the log file and cache it in memory
errors = logData.filter(lambda line: "ERROR" in line)  # keep only the lines containing "ERROR"

Functions defined with the def keyword can be passed in exactly the same way, which is especially useful for longer functions that cannot be expressed with a lambda:

def is_error(line):
   return "ERROR" in line
errors = logData.filter(is_error)


Functions can also access objects in enclosing scopes; however, modifications made to those objects inside RDD methods will not be propagated back to the driver:

error_keywords = ["Exception", "Error"]
def is_error(line):
   return any(keyword in line for keyword in error_keywords)
errors = logData.filter(is_error)
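The point about modifications not being propagated back is easy to trip over; the following is a minimal sketch of the pitfall (the variable names are illustrative only):

counter = 0
def count_errors(line):
   global counter
   if "ERROR" in line:
      counter += 1                 # increments a copy of counter inside the worker process

logData.foreach(count_errors)
print counter                      # typically still 0 on the driver

# To aggregate a value back to the driver, use an Accumulator instead:
errors_seen = sc.accumulator(0)
logData.foreach(lambda line: errors_seen.add(1) if "ERROR" in line else None)
print errors_seen.value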

PySpark also fully supports interactive use; to launch an interactive shell, simply run ./bin/pyspark.

Installing and Configuring PySpark

PySpark requires Python 2.6 or higher. PySpark applications are executed with a standard CPython interpreter, so Python modules that use C extensions can be used.
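For instance (a minimal sketch, assuming NumPy is installed for the Python interpreter that the workers use), a C-extension module such as NumPy can be called inside RDD operations:

import numpy as np

nums = sc.parallelize([1.0, 4.0, 9.0, 16.0])
roots = nums.map(lambda x: float(np.sqrt(x)))   # NumPy, a C-extension module, runs inside the Python workers
print roots.collect()                           # [1.0, 2.0, 3.0, 4.0]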

By default, PySpark requires python to be available on the system PATH and uses it to run programs. An alternate Python executable may be specified by setting the PYSPARK_PYTHON environment variable in conf/spark-env.sh (or .cmd on Windows).

All of PySpark's library dependencies, including Py4J, are bundled with PySpark.

Standalone PySpark applications must be run with the bin/pyspark script, which automatically configures the Java and Python environments using the settings in conf/spark-env.sh or .cmd. The script also automatically adds the pyspark package to the PYTHONPATH.

Interactive Use of PySpark

The bin/pyspark script launches a Python interpreter for running PySpark applications. To use PySpark interactively, first build Spark, then launch the script directly from the command line without any options:

$ sbt/sbt assembly
$ ./bin/pyspark

The Python shell is a simple way to learn the API, and we can use it to explore data interactively:

>>> words = sc.textFile("/usr/share/dict/words")
>>> words.filter(lambda w: w.startswith("spar")).take(5)
[u'spar', u'sparable', u'sparada', u'sparadrap', u'sparagrass']
>>> help(pyspark) # Show all pyspark functions

By default, the bin/pyspark shell creates a SparkContext that runs applications locally on a single core. To connect to a non-local cluster, or to use multiple cores, set the MASTER environment variable.

For example, to use the bin/pyspark shell with a standalone Spark cluster:

$ MASTER=spark://IP:PORT ./bin/pyspark

Or, to use four cores on the local machine:

$ MASTER=local[4] ./bin/pyspark

IPython

We can also launch PySpark in IPython, an enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON environment variable to 1 when running bin/pyspark:

$ IPYTHON=1 ./bin/pyspark

In addition, we can customize the ipython command by setting IPYTHON_OPTS.

For example, to launch the IPython Notebook with PyLab graphing support:

$ IPYTHON_OPTS="notebook --pylab inline" ./bin/pyspark

If we set the MASTER environment variable, IPython also works on a cluster or with multiple cores.

Standalone Programs

We can use PySpark from standalone Python scripts by creating a SparkContext in the script and running the script with bin/pyspark.

Using the Python API (PySpark), let us now see how to write a standalone application.

For example, here we create a simple Spark application, SimpleApp1.py:

"""SimpleApp1.py"""
from pyspark import SparkContext
logFile = "$YOUR_SPARK_HOME/README.md"  # Should be some file on your system
sc = SparkContext("local", "Simple App1")
logData = sc.textFile(logFile).cache()
numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()
print "Lines with a: %i, lines with b: %i" % (numAs, numBs)

This program simply counts the number of lines containing the letter 'a' and the number containing the letter 'b' in a text file. Note that we need to replace $YOUR_SPARK_HOME with Spark's installation location. As in the Scala and Java examples, we use a SparkContext to create RDDs.

Now, using the bin/pyspark script, we can run this application:

$ cd $SPARK_HOME
$ ./bin/pyspark SimpleApp1.py
...
Lines with a: 46, Lines with b: 23

We can deploy code dependencies by listing them in the pyFiles option of the SparkContext constructor:

from pyspark import SparkContext
sc = SparkContext("local", "App Name", pyFiles=['MyFile.py', 'lib.zip', 'app.egg'])

All of the files listed here will be added to the PYTHONPATH and shipped to remote worker machines. We can also add code dependencies to an existing SparkContext using its addPyFile() method.
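For example (a minimal sketch; the module name extra_helpers.py is just a placeholder), a dependency can be shipped to the workers after the SparkContext has been created:

sc.addPyFile("extra_helpers.py")   # ship this placeholder file to every worker and add it to their PYTHONPATH
import extra_helpers               # the module is now importable inside RDD functions as well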

We can also set configuration properties by passing a SparkConf object to SparkContext:

from pyspark import SparkConf, SparkContext
conf = (SparkConf()
        .setMaster("local")
        .setAppName("My app")
        .set("spark.executor.memory", "1g"))
sc = SparkContext(conf = conf)

Comparison: Python vs Scala


  • Performance

Python- In terms of raw execution speed, Python is generally slower than Scala on Spark.

Scala- Scala code runs natively on the JVM and is often considerably faster, commonly cited as up to around 10 times faster for some workloads.

  • Type Safety

Python- It is a dynamically typed language.

Scala- It is a statically typed language.

  • Ease of Use

Python- Comparatively, it is less verbose and easier to use.

Scala- It is a more verbose language.

  • Advanced Features

Python- For machine learning and natural language processing, Python offers a far richer set of data science tools and libraries than Scala.

Scala- It has advanced language features such as existential types, macros, and implicits, but it lacks Python's breadth of visualization and local data-transformation libraries.

So, this was all about the PySpark tutorial. We hope you liked our explanation.

Summary

Python has a rich library ecosystem, which is why the majority of data scientists and analytics experts use it today. Integrating Python with Spark is therefore a boon to them.

We saw what the PySpark framework is and how it brings Python support to Spark. We also discussed the meaning of PySpark, its uses, and its installation and configuration.

For more articles on PySpark, keep visiting DataFlair. If you still have any doubts about this PySpark tutorial, ask in the comment section.

