PySpark Tutorial – Why PySpark is Gaining Hype among Data Scientists?
This PySpark tutorial takes you through the various aspects of the PySpark framework. By the end of it, you should have a solid working knowledge of PySpark.
The PySpark framework is gaining popularity in the data science field. Spark is a very useful tool for data scientists to turn research code into production code, and PySpark makes this process easily accessible.
Without wasting any time, let’s start with our PySpark tutorial.
What is PySpark?
PySpark, released by the Apache Spark community, is the Python API for Spark. With PySpark, you can easily work with and integrate RDDs in Python; the Py4j library makes this possible by bridging Python and the JVM.
The PySpark framework offers several notable features:
- In-memory processing that is typically faster than disk-based frameworks such as Hadoop MapReduce.
- Real-time computation and low latency, thanks to in-memory processing.
- Polyglot support: Spark is compatible with several languages, including Java, Python, Scala, and R.
- Powerful caching and efficient disk persistence.
- Deployment on Hadoop clusters through YARN.
Audience for PySpark Tutorial
- Professionals aspiring to build a career in programming, and those who want to perform real-time processing with a big data framework, can take up this PySpark tutorial.
- Those who want to learn PySpark along with its modules and submodules will also benefit from this tutorial.
Prerequisites to PySpark
We assume that readers already have basic knowledge of programming languages and frameworks before starting this PySpark tutorial. A sound knowledge of Spark, Hadoop, the Scala programming language, HDFS, and Python is also recommended.
Factors about PySpark API
There are a few key differences between the Python and Scala APIs, which we will discuss in this PySpark tutorial:
- Since Python is dynamically typed, PySpark RDDs can hold objects of multiple types.
- PySpark does not yet support a few API calls, such as lookup and non-text input files; support is planned for future releases.
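To illustrate the dynamic-typing point, here is a plain-Python sketch (not PySpark itself) of the kind of mixed-type data that sc.parallelize() could turn into an RDD; the list `mixed` and its contents are made up for illustration:

```python
# A heterogeneous list: PySpark's sc.parallelize(mixed) accepts data like
# this directly, because RDD elements need not share one static type.
mixed = [1, "two", 3.0, ("four", 4)]
types = [type(x).__name__ for x in mixed]
print(types)  # ['int', 'str', 'float', 'tuple']
```

In a statically typed API such as Scala's, such a collection would need a common supertype; in Python it just works.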
In PySpark, RDDs support the same methods as their Scala counterparts, but they take Python functions and return Python collection types as results. Short functions can be passed to RDD methods using Python's lambda syntax:
logData = sc.textFile(logFile).cache()
errors = logData.filter(lambda line: "ERROR" in line)
Functions defined with the def keyword can also be passed to PySpark. This is useful for longer functions that cannot be expressed as a lambda:
def is_error(line):
    return "ERROR" in line

errors = logData.filter(is_error)
Moreover, functions can access objects in enclosing scopes, although modifications to those objects made within RDD methods will not be propagated back:
error_keywords = ["Exception", "Error"]

def is_error(line):
    return any(keyword in line for keyword in error_keywords)

errors = logData.filter(is_error)
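The non-propagation behavior can be pictured in plain Python (this is an analogy, not actual PySpark): each worker receives a serialized copy of the closed-over objects, so mutations affect only that copy. The `worker_side` helper below is hypothetical and only mimics what a worker does:

```python
import copy

counter = {"errors": 0}  # driver-side object captured by the closure

def worker_side(lines, shared):
    # Workers get a serialized copy of closed-over objects,
    # so mutations happen on the copy, not the driver's original.
    local = copy.deepcopy(shared)
    for line in lines:
        if "ERROR" in line:
            local["errors"] += 1
    return local

result = worker_side(["ERROR a", "ok", "ERROR b"], counter)
print(result["errors"])   # 2
print(counter["errors"])  # 0 -- the driver-side dict is unchanged
```

This is why counting via a mutated closure variable does not work in Spark; aggregations should use RDD operations such as count() instead.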
PySpark also fully supports interactive use; to launch an interactive shell, simply run ./bin/pyspark.
Installing and Configuring PySpark
PySpark requires Python 2.6 or higher. PySpark applications are executed using a standard CPython interpreter in order to support Python modules that use C extensions.
In addition, PySpark requires python to be available on the system PATH and uses it to run programs by default. An alternate Python executable may be specified by setting the PYSPARK_PYTHON environment variable in conf/spark-env.sh (or .cmd on Windows).
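For example, pointing Spark at a specific interpreter might look like this (the path /usr/bin/python2.7 is only an example; use whatever interpreter is installed on your machine):

```shell
# conf/spark-env.sh
export PYSPARK_PYTHON=/usr/bin/python2.7
```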
All of PySpark's library dependencies, including Py4J, are bundled with PySpark.
Standalone PySpark applications must be run using the bin/pyspark script, which automatically configures the Java and Python environments using the settings in conf/spark-env.sh or .cmd. The script also automatically adds the pyspark package to the PYTHONPATH.
Interactive Use of PySpark
The bin/pyspark script launches a Python interpreter to run PySpark applications. To use PySpark interactively, first build Spark, then launch the script directly from the command line without any options:
$ sbt/sbt assembly
$ ./bin/pyspark
The Python shell can be used to explore data interactively, and it is a simple way to learn the API:
>>> words = sc.textFile("/usr/share/dict/words")
>>> words.filter(lambda w: w.startswith("spar")).take(5)
[u'spar', u'sparable', u'sparada', u'sparadrap', u'sparagrass']
>>> help(pyspark)  # Show all pyspark functions
By default, the bin/pyspark shell creates a SparkContext that runs applications locally on a single core. To connect to a non-local cluster, or to use multiple cores, set the MASTER environment variable.
For example, to use the bin/pyspark shell with a standalone Spark cluster:
$ MASTER=spark://IP:PORT ./bin/pyspark
Or, to use exactly four cores on the local machine:
$ MASTER=local[4] ./bin/pyspark
We can also launch PySpark in IPython, an enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:
$ IPYTHON=1 ./bin/pyspark
In addition, by setting IPYTHON_OPTS, we can customize the options passed to the ipython command.
For example, to launch the IPython Notebook with PyLab graphing support:
$ IPYTHON_OPTS="notebook --pylab inline" ./bin/pyspark
Moreover, if we set the MASTER environment variable, IPython also works on a cluster or on multiple cores.
We can also use PySpark from standalone Python scripts by creating a SparkContext in the script and running the script with bin/pyspark.
So, using the Python API (PySpark), we will see how to write a standalone application.
Here we are creating a simple Spark application, SimpleApp1.py:
"""SimpleApp1.py""" from pyspark import SparkContext logFile = "$YOUR_SPARK_HOME/README.md" # Should be some file on your system sc = SparkContext("local", "Simple App1") logData = sc.textFile(logFile).cache() numAs = logData.filter(lambda s: 'a' in s).count() numBs = logData.filter(lambda s: 'b' in s).count() print "Lines with a: %i, lines with b: %i" % (numAs, numBs)
This program simply counts the number of lines containing 'a' and the number containing 'b' in a text file. Note that we need to replace $YOUR_SPARK_HOME with Spark's installation location. As in the Scala and Java examples, we use a SparkContext to create RDDs.
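Stripped of Spark, the counting logic is ordinary Python. As a quick sanity check of the same logic on a made-up list of lines (the sample strings below are invented for illustration):

```python
# Hypothetical sample lines standing in for the contents of README.md
lines = ["Spark is fast", "build with sbt", "a b c"]
numAs = len([s for s in lines if 'a' in s])
numBs = len([s for s in lines if 'b' in s])
print("Lines with a: %i, lines with b: %i" % (numAs, numBs))
# -> Lines with a: 2, lines with b: 2
```

Spark distributes exactly this filter-and-count pattern across the cluster, which is why the RDD version reads almost identically.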
Now, using the bin/pyspark script, we can run this application:
$ cd $SPARK_HOME
$ ./bin/pyspark SimpleApp1.py
...
Lines with a: 46, Lines with b: 23
We can declare code dependencies in the SparkContext constructor by listing them in the pyFiles option:
from pyspark import SparkContext
sc = SparkContext("local", "App Name",
                  pyFiles=['MyFile.py', 'lib.zip', 'app.egg'])
All the files listed here will be added to the PYTHONPATH and shipped to remote worker machines. Code dependencies can also be added to an existing SparkContext with its addPyFile() method.
Configuration properties can also be set by passing a SparkConf object to SparkContext:
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("local")
        .setAppName("My app")
        .set("spark.executor.memory", "1g"))
sc = SparkContext(conf=conf)
Comparison: Python vs Scala
- Performance
Python- In terms of raw performance, it is slower than Scala.
Scala- Compiled Scala code generally runs faster than Python, often by a wide margin.
- Type Safety
Python- It is a dynamically typed language.
Scala- It is a statically typed language.
- Ease of Use
Python- It is comparatively less verbose and easier to use.
Scala- It is a more verbose language.
- Advanced Features
Python- For machine learning and natural language processing, Python has far richer data science tools and libraries than Scala.
Scala- It offers advanced features such as existential types, macros, and implicits, but it still lags behind Python in visualization and local data transformations.
So, this was all about the PySpark tutorial. We hope you liked our explanation.
Python has a rich library ecosystem, which is why the majority of data scientists and analytics experts use it today; integrating Python with Spark is therefore a boon to them.
We saw the concept of the PySpark framework, which supports Python on Spark, and discussed what PySpark is, its uses, and its installation and configuration.
For more articles on PySpark, keep visiting DataFlair. If you still have any doubts about this PySpark tutorial, ask in the comment section.