Explain Apache Hadoop's MapReduce.

Apache Hadoop's MapReduce 

  • Apache Hadoop's MapReduce is the most widely used batch data-processing framework. The points below explain in detail how Hadoop processes data using MapReduce.


  • MapReduce: Hadoop MapReduce is a Java-based system for processing large datasets. It reads data from HDFS and divides the dataset into smaller pieces. Each piece is then scheduled and distributed for processing among the nodes available in the Hadoop cluster. Each node performs the required computation on its chunk of data, and the intermediate results obtained are written back to HDFS. These intermediate outputs may then be reassembled, split, and redistributed for further processing until the final results are written back to HDFS (a driver sketch showing how such a job is configured follows this list).
  • As already discussed above, the MapReduce programming model consists of two different jobs executed by programs: a Map job and a Reduce job. Typically, the Map operation turns a collection of data into another set of data in which individual pieces of the data are broken down into tuples consisting of key-value pairs. The framework then shuffles and sorts these key-value pairs so that all values belonging to the same key are grouped together. The Reduce task takes the output of the Map tasks as input and merges those data tuples into a smaller collection of tuples (a minimal word-count sketch of both jobs follows this list).
  • Batch processing, in a nutshell, is a way of accumulating work and processing it all at once at a scheduled interval, such as at the end of the day, week, or month. In an enterprise, the data accumulated over such a period can be very large, so a distributed computing environment and the MapReduce technique play a vital role in handling it.
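To make the Map and Reduce jobs concrete, here is a minimal word-count sketch using the standard org.apache.hadoop.mapreduce API. The class and field names (WordCountExample, TokenizerMapper, IntSumReducer) are illustrative choices, not something from the discussion above: the Mapper emits (word, 1) key-value pairs, and the Reducer merges all counts for each word into a single, smaller tuple.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCountExample {

      // Map job: break each input line into (word, 1) key-value pairs.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);   // emit an intermediate key-value pair
          }
        }
      }

      // Reduce job: merge all counts for a key into a single output tuple.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();           // values were grouped by key during shuffle and sort
          }
          result.set(sum);
          context.write(key, result);   // final (word, total) pair written to the output in HDFS
        }
      }
    }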
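And a minimal driver sketch, assuming the WordCountExample class above and that the input and output paths are passed on the command line. It shows how the job is pointed at data in HDFS, how the Mapper and Reducer are wired in, and where the final results are written back.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        // Wire in the Map and Reduce jobs described above.
        job.setMapperClass(WordCountExample.TokenizerMapper.class);
        job.setCombinerClass(WordCountExample.IntSumReducer.class); // optional local pre-aggregation
        job.setReducerClass(WordCountExample.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input is read from HDFS and split among the cluster's nodes;
        // the final results are written back to HDFS.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

It could be run with something like "hadoop jar wordcount.jar WordCountDriver /input/books /output/wordcounts", where the jar name and paths are illustrative.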
