- 1. Objective
- 2. What is Hadoop OutputFormat?
- 3. Types of OutputFormat in MapReduce
- 4. Conclusion
1. Objective
The Hadoop OutputFormat checks the output specification of the job. It determines how the RecordWriter implementation writes output to output files. In this blog, we will see what Hadoop OutputFormat is, what Hadoop RecordWriter is, and how a RecordWriter is used in Hadoop.
We will also discuss the various types of OutputFormat in Hadoop, such as TextOutputFormat, SequenceFileOutputFormat, MapFileOutputFormat, SequenceFileAsBinaryOutputFormat, DBOutputFormat, LazyOutputFormat, and MultipleOutputs.
2. What is Hadoop OutputFormat?
Before we start with Hadoop OutputFormat in MapReduce, let us first see what a RecordWriter is and what role it plays in MapReduce.
2.1. Hadoop RecordWriter
A RecordWriter writes the output key-value pairs produced by the Reducer phase to output files.
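For reference, this is the abbreviated shape of the `org.apache.hadoop.mapreduce.RecordWriter` abstract class that every concrete writer fills in (a sketch; it will not compile without Hadoop on the classpath, since `TaskAttemptContext` is a Hadoop type):

```java
import java.io.IOException;

// Abbreviated shape of org.apache.hadoop.mapreduce.RecordWriter.
// An OutputFormat returns one of these; the framework calls write()
// once per reducer output pair, then close() when the task ends.
public abstract class RecordWriter<K, V> {

    // Write one key-value pair to the destination (file, table, ...).
    public abstract void write(K key, V value)
            throws IOException, InterruptedException;

    // Flush and release resources when the task finishes.
    public abstract void close(TaskAttemptContext context)
            throws IOException, InterruptedException;
}
```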
2.2. Hadoop OutputFormat
As we saw above, the Hadoop RecordWriter takes output data from the Reducer and writes it to output files. How these output key-value pairs are written to output files by the RecordWriter is determined by the OutputFormat. OutputFormat is the counterpart of InputFormat: the OutputFormat instances provided by Hadoop are used to write to files on HDFS or on the local disk. OutputFormat describes the output specification of a MapReduce job. On the basis of this output specification:
- MapReduce job checks that the output directory does not already exist.
- OutputFormat provides the RecordWriter implementation to be used to write the output files of the job. Output files are stored in a FileSystem.
The FileOutputFormat.setOutputPath() method is used to set the output directory. Every Reducer writes a separate file into a common output directory.
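A minimal driver sketch showing where setOutputPath() and the OutputFormat choice fit (the class name, job name, and path argument are hypothetical; a Hadoop 2.x+ classpath is assumed):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class OutputFormatDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "outputformat-demo");
        job.setJarByClass(OutputFormatDriver.class);

        // Declare the key/value types the reducer emits.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Choose the OutputFormat; TextOutputFormat is the default anyway.
        job.setOutputFormatClass(TextOutputFormat.class);

        // The job fails up front if this directory already exists.
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```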
3. Types of OutputFormat in MapReduce
There are various types of Hadoop OutputFormat. Let us see some of them below:
3.1. TextOutputFormat
The default OutputFormat in MapReduce is TextOutputFormat, which writes (key, value) pairs on individual lines of text files. Its keys and values can be of any type, since TextOutputFormat turns them into strings by calling toString() on them. Each key-value pair is separated by a tab character, which can be changed using the mapreduce.output.textoutputformat.separator property. KeyValueTextInputFormat is the natural format for reading these output text files back, since it breaks lines into key-value pairs based on a configurable separator.
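The line layout is easy to mimic in plain Java. The helper below is a hypothetical illustration (not Hadoop code) of how each output line is built; the real separator is configured through the property named in the comment:

```java
public class TextOutputDemo {
    // TextOutputFormat calls toString() on the key and the value and joins
    // them with a separator: tab by default, overridable via the
    // mapreduce.output.textoutputformat.separator property.
    static String formatRecord(Object key, Object value, String separator) {
        return key.toString() + separator + value.toString();
    }

    public static void main(String[] args) {
        // One line of a typical word-count output file.
        System.out.println(formatRecord("hadoop", 3, "\t"));
    }
}
```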
3.2. SequenceFileOutputFormat
SequenceFileOutputFormat writes sequence files for its output. It is a good intermediate format for use between MapReduce jobs, since it rapidly serializes arbitrary data types to the file, and the corresponding SequenceFileInputFormat deserializes the file into the same types, presenting the data to the next mapper in the same manner as it was emitted by the previous reducer. Sequence files are also compact and readily compressible; compression is controlled by the static methods on SequenceFileOutputFormat.
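In a driver, those static compression hooks look roughly like this (a fragment, not a complete class; the choice of Gzip and block compression is an assumption for illustration):

```java
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// Inside the driver, after Job job = Job.getInstance(...):
job.setOutputFormatClass(SequenceFileOutputFormat.class);

// Static methods on SequenceFileOutputFormat control compression.
SequenceFileOutputFormat.setCompressOutput(job, true);
SequenceFileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);
```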
3.3. SequenceFileAsBinaryOutputFormat
SequenceFileAsBinaryOutputFormat is a variant of SequenceFileOutputFormat that writes keys and values to a sequence file in raw binary format.
3.4. MapFileOutputFormat
MapFileOutputFormat is a form of FileOutputFormat that writes the output as map files. Since the keys in a MapFile must be added in order, we need to ensure that the reducer emits keys in sorted order.
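One payoff of map files is random lookup after the job finishes. A sketch of the lookup side (a fragment; `outputDir`, `conf`, and the Text/IntWritable types are assumptions matching a word-count-style job):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

// Open one MapFile.Reader per reducer partition in the output directory,
// then route a key to the right reader with the same partitioner the
// job used, and read its value.
MapFile.Reader[] readers = MapFileOutputFormat.getReaders(outputDir, conf);
Partitioner<Text, IntWritable> partitioner = new HashPartitioner<>();
IntWritable value = new IntWritable();
MapFileOutputFormat.getEntry(readers, partitioner, new Text("hadoop"), value);
```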
3.5. MultipleOutputs
MultipleOutputs allows writing data to files whose names are derived from the output keys and values, or in fact from an arbitrary string.
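A reducer sketch using MultipleOutputs, where the file name is derived from the key (the class name and the first-letter naming scheme are hypothetical):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class PartitionedReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    private MultipleOutputs<Text, IntWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
                          Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // The third argument is the base output path: here the file name
        // is derived from the key's first letter, e.g. keys starting with
        // "h" land in files named "h-r-00000" and so on.
        mos.write(key, new IntWritable(sum), key.toString().substring(0, 1));
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        mos.close(); // flush all the extra writers
    }
}
```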
3.6. LazyOutputFormat
Sometimes FileOutputFormat creates output files even if they are empty. LazyOutputFormat is a wrapper OutputFormat that ensures the output file is created only when the first record is emitted for a given partition.
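Because it is a wrapper, LazyOutputFormat is configured with a static helper instead of job.setOutputFormatClass() (a driver fragment):

```java
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Instead of job.setOutputFormatClass(TextOutputFormat.class), wrap the
// real format so that empty partitions produce no part files at all:
LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
```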
3.7. DBOutputFormat
DBOutputFormat in Hadoop is an OutputFormat for writing to relational databases and to HBase. It sends the reduce output to a SQL table. It accepts key-value pairs where the key has a type extending DBWritable. The returned RecordWriter writes only the key to the database, using a batch SQL query.
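A driver fragment wiring up DBOutputFormat (the JDBC driver class, URL, credentials, and table/column names are all hypothetical; the JDBC driver jar must be available on the task classpath):

```java
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

// Tell the tasks how to reach the database.
DBConfiguration.configureDB(job.getConfiguration(),
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://dbhost/analytics", "user", "password");

job.setOutputFormatClass(DBOutputFormat.class);

// Table name followed by its column names; the key's DBWritable.write()
// fills these columns, and records are sent as batched INSERT statements.
DBOutputFormat.setOutput(job, "word_counts", "word", "count");
```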
4. Conclusion
Hence, these types of Hadoop OutputFormat check the output specification of the job. In the next session, we will discuss Hadoop InputSplits in detail. If you have any doubt related to Hadoop OutputFormat, please let us know in the comment box. We will be happy to solve your queries.