Hadoop is designed for batch processing. Batch processing is very efficient for processing high-volume data.
Hadoop MapReduce is a batch-oriented processing tool: it takes a large dataset as input, processes it, and produces a result.
Batch processing essentially means processing data at rest, taking a large amount of data
at once and producing the output. MapReduce is slower than Spark largely because it produces a lot of intermediate data, which Hadoop writes to disk between the map and reduce stages.
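To make the batch model concrete, here is a minimal sketch of the MapReduce pattern in plain Python (not Hadoop itself): a map phase that emits key-value pairs, a shuffle that groups them by key, and a reduce phase that aggregates each group. The materialized intermediate list stands in for the intermediate data Hadoop writes to disk.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for line in records:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/sort: group intermediate pairs by key, then reduce each group.
    # Materializing this intermediate list mimics the intermediate data
    # that Hadoop persists to disk between the map and reduce stages.
    shuffled = sorted(pairs, key=itemgetter(0))
    for word, group in groupby(shuffled, key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

# The whole input batch is available up front: data at rest.
batch = ["big data batch", "batch processing of big data"]
result = dict(reduce_phase(map_phase(batch)))
print(result)  # {'batch': 2, 'big': 2, 'data': 2, 'of': 1, 'processing': 1}
```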
Spark supports batch processing as well as stream processing.
Spark Streaming processes data streams in micro-batches. A micro-batch is essentially a collect-then-process
computational model: incoming data is buffered briefly and then handled as a small batch. Spark processes data faster than MapReduce because it caches input data in memory as RDDs (Resilient Distributed Datasets).
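The collect-then-process idea behind micro-batching can be sketched in plain Python (Spark Streaming batches by a time interval; this illustration batches by event count to stay deterministic, and the batch size of 3 is arbitrary):

```python
from collections import Counter

def micro_batch_stream(events, batch_size=3):
    # Collect-then-process: buffer incoming events, and once a micro-batch
    # is full, run the same batch computation (here, counting) on it.
    buffer, totals = [], Counter()
    for event in events:
        buffer.append(event)            # collect
        if len(buffer) == batch_size:   # micro-batch boundary reached
            totals.update(buffer)       # process the small batch
            buffer.clear()
    if buffer:                          # flush the final partial batch
        totals.update(buffer)
    return totals

stream = ["click", "view", "click", "view", "click", "buy", "view"]
counts = micro_batch_stream(stream)
print(dict(counts))
```

Each micro-batch is handled by ordinary batch logic, which is why Spark can reuse the same engine for both batch and streaming workloads.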
Please find more details at the link below:
http://data-flair.training/blogs/spark-vs-flink-vs-hadoop-comparison