From: swagle@apache.org
To: ambari-commits@incubator.apache.org
Reply-To: ambari-dev@incubator.apache.org
Subject: git commit: AMBARI-2784. Ambari memory params configuration is not right for yarn and mapreduce. (swagle)
Date: Fri, 16 Aug 2013 21:13:03 +0000 (UTC)

Updated Branches:
  refs/heads/trunk 1b47ef01f -> 202e25712

AMBARI-2784. Ambari memory params configuration is not right for yarn and mapreduce. (swagle)

Project: http://git-wip-us.apache.org/repos/asf/incubator-ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ambari/commit/202e2571
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ambari/tree/202e2571
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ambari/diff/202e2571

Branch: refs/heads/trunk
Commit: 202e2571245eb36d2d5f1c7f39352e634a7dea14
Parents: 1b47ef0
Author: Siddharth Wagle
Authored: Fri Aug 16 14:11:28 2013 -0700
Committer: Siddharth Wagle
Committed: Fri Aug 16 14:11:28 2013 -0700

----------------------------------------------------------------------
 .../MAPREDUCE2/configuration/mapred-site.xml   | 388 +++++++++++-------
 .../YARN/configuration/capacity-scheduler.xml  |   2 +-
 .../MAPREDUCE2/configuration/mapred-site.xml   | 392 ++++++++++++-------
 3 files changed, 493 insertions(+), 289 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-ambari/blob/202e2571/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml
index 51e3e4d..5f95dc3 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml
@@ -27,96 +27,88 @@
mapreduce.task.io.sort.mb 100 - No description + + The total amount of buffer memory to use while sorting files, in megabytes. + By default, gives each merge stream 1MB, which should minimize seeks.
+ mapreduce.map.sort.spill.percent 0.1 - No description + + The soft limit in the serialization buffer. Once reached, a thread will + begin to spill the contents to disk in the background. Note that + collection will not block if this threshold is exceeded while a spill + is already in progress, so spills may be larger than this threshold when + it is set to less than .5 + mapreduce.task.io.sort.factor 100 - No description + + The number of streams to merge at once while sorting files. + This determines the number of open file handles. + - - - mapreduce.jobtracker.system.dir - - No description - true - - - - - mapreduce.cluster.local.dir - - No description - true - - mapreduce.reduce.shuffle.parallelcopies 30 - No description - - - - mapreduce.tasktracker.map.tasks.maximum - - No description + + The default number of parallel transfers run by reduce during + the copy(shuffle) phase. + mapreduce.map.speculative false - If true, then multiple instances of some map tasks - may be executed in parallel. + + If true, then multiple instances of some map tasks + may be executed in parallel. + mapreduce.reduce.speculative false - If true, then multiple instances of some reduce tasks - may be executed in parallel. + + If true, then multiple instances of some reduce tasks may be + executed in parallel. + mapreduce.job.reduce.slowstart.completedmaps 0.05 - - - - mapreduce.reduce.merge.inmem.threshold - 1000 - The threshold, in terms of the number of files - for the in-memory merge process. When we accumulate threshold number of files - we initiate the in-memory merge and spill to disk. A value of 0 or less than - 0 indicates we want to DON'T have any threshold and instead depend only on - the ramfs's memory consumption to trigger the merge. - + + Fraction of the number of maps in the job which should be complete before + reduces are scheduled for the job. + mapreduce.reduce.shuffle.merge.percent 0.66 - The usage threshold at which an in-memory merge will be - initiated, expressed as a percentage of the total memory allocated to - storing in-memory map outputs, as defined by - mapreduce.reduce.shuffle.input.buffer.percent. - + + The usage threshold at which an in-memory merge will be + initiated, expressed as a percentage of the total memory allocated to + storing in-memory map outputs, as defined by + mapreduce.reduce.shuffle.input.buffer.percent. + mapreduce.reduce.shuffle.input.buffer.percent 0.7 - The percentage of memory to be allocated from the maximum heap - size to storing map outputs during the shuffle. - + + The percentage of memory to be allocated from the maximum heap + size to storing map outputs during the shuffle. + @@ -127,144 +119,254 @@ - - mapreduce.output.fileoutputformat.compress.type - BLOCK - If the job outputs are to compressed as SequenceFiles, how should - they be compressed? Should be one of NONE, RECORD or BLOCK. - - + + mapreduce.output.fileoutputformat.compress.type + BLOCK + + If the job outputs are to compressed as SequenceFiles, how should + they be compressed? Should be one of NONE, RECORD or BLOCK. + + mapreduce.reduce.input.buffer.percent 0.0 - The percentage of memory- relative to the maximum heap size- to - retain map outputs during the reduce. When the shuffle is concluded, any - remaining map outputs in memory must consume less than this threshold before - the reduce can begin. - + + The percentage of memory- relative to the maximum heap size- to + retain map outputs during the reduce. 
When the shuffle is concluded, any + remaining map outputs in memory must consume less than this threshold before + the reduce can begin. + - - mapreduce.reduce.input.limit - 10737418240 - The limit on the input size of the reduce. (This value - is 10 Gb.) If the estimated input size of the reduce is greater than - this value, job is failed. A value of -1 means that there is no limit - set. - - - mapreduce.map.output.compress - mapreduce.task.timeout 600000 - The number of milliseconds before a task will be - terminated if it neither reads an input, writes an output, nor - updates its status string. - + + The number of milliseconds before a task will be + terminated if it neither reads an input, writes an output, nor + updates its status string. + - mapred.child.java.opts - -Xmx512m - No description + mapreduce.map.memory.mb + 1536 - mapreduce.cluster.reducememory.mb - 2048 + mapreduce.reduce.memory.mb + 1024 - mapreduce.map.memory.mb - 1536 + mapreduce.tasktracker.keytab.file + + The filename of the keytab for the task tracker - mapreduce.reduce.memory.mb + mapreduce.jobhistory.keytab.file + + + The keytab for the job history server principal. + + + + mapreduce.shuffle.port + 13562 + + Default port that the ShuffleHandler will run on. + ShuffleHandler is a service run at the NodeManager to facilitate + transfers of intermediate Map outputs to requesting Reducers. + + + + + mapreduce.jobhistory.intermediate-done-dir + /mr-history/tmp + + Directory where history files are written by MapReduce jobs. + + + + + mapreduce.jobhistory.done-dir + /mr-history/done + + Directory where history files are managed by the MR JobHistory Server. + + + +      + mapreduce.jobhistory.address + localhost:10020 + Enter your JobHistoryServer hostname. + + +      + mapreduce.jobhistory.webapp.address + localhost:19888 + Enter your JobHistoryServer hostname. + + + + mapreduce.framework.name + yarn + + The runtime framework for executing MapReduce jobs. Can be one of local, + classic or yarn. + + + + + yarn.app.mapreduce.am.staging-dir + /user + + The staging dir used while submitting jobs. + + + + + yarn.app.mapreduce.am.resource.mb 1024 + The amount of memory the MR AppMaster needs. - mapreduce.jobtracker.tasktracker.maxblacklists - 16 + yarn.app.mapreduce.am.command-opts + -Xmx756m - if node is reported blacklisted by 16 successful jobs within timeout-window, it will be graylisted + Java opts for the MR App Master processes. + The following symbol, if present, will be interpolated: @taskid@ is replaced + by current TaskID. Any other occurrences of '@' will go unchanged. + For example, to enable verbose gc logging to a file named for the taskid in + /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: + -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc + + Usage of -Djava.library.path can cause programs to no longer function if + hadoop native libraries are used. These values should instead be set as part + of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and + mapreduce.reduce.env config settings. - mapreduce.tasktracker.healthchecker.script.path - + yarn.app.mapreduce.am.admin-command-opts + -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN + + Java opts for the MR App Master processes for admin purposes. + It will appears before the opts set by yarn.app.mapreduce.am.command-opts and + thus its options can be overridden user. + + Usage of -Djava.library.path can cause programs to no longer function if + hadoop native libraries are used. 
These values should instead be set as part + of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and + mapreduce.reduce.env config settings. + - mapreduce.tasktracker.healthchecker.script.timeout - 60000 + yarn.app.mapreduce.am.log.level + INFO + MR App Master process log level. - mapreduce.tasktracker.keytab.file + yarn.app.mapreduce.am.env - The filename of the keytab for the task tracker + + User added environment variables for the MR App Master + processes. Example : + 1) A=foo This will set the env variable A to foo + 2) B=$B:c This is inherit tasktracker's B env variable. + - - mapreduce.jobhistory.keytab.file - - - The keytab for the job history server principal. - - - - mapreduce.shuffle.port - 8081 - Default port that the ShuffleHandler will run on. ShuffleHandler is a service run at the NodeManager to facilitate transfers of intermediate Map outputs to requesting Reducers. - - - - mapreduce.jobhistory.intermediate-done-dir - /mr-history/tmp - Directory where history files are written by MapReduce jobs. - - - - mapreduce.jobhistory.done-dir - /mr-history/done - Directory where history files are managed by the MR JobHistory Server. - - -      - mapreduce.jobhistory.address - localhost:10020 - Enter your JobHistoryServer hostname. - - -      - mapreduce.jobhistory.webapp.address - localhost:19888 - Enter your JobHistoryServer hostname. - - - - mapreduce.framework.name - yarn - No description - - - - yarn.app.mapreduce.am.staging-dir - /user - - The staging dir used while submitting jobs. - - + + mapreduce.admin.map.child.java.opts + -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN + + + + mapreduce.admin.reduce.child.java.opts + -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN + + + + mapreduce.application.classpath + $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/* + + CLASSPATH for MR applications. A comma-separated list of CLASSPATH + entries. + + + + + mapreduce.am.max-attempts + 2 + + The maximum number of application attempts. It is a + application-specific setting. It should not be larger than the global number + set by resourcemanager. Otherwise, it will be override. The default number is + set to 2, to allow at least one retry for AM. + + + + + mapreduce.map.memory.mb + 512 + + Larger resource limit for maps. + + + + + mapreduce.map.java.opts + -Xmx320m + + Larger heap-size for child jvms of maps. + + + + + mapreduce.reduce.memory.mb + 1024 + + Larger resource limit for reduces. + + + + + mapreduce.reduce.java.opts + -Xmx756m + + Larger heap-size for child jvms of reduces. + + + + + mapreduce.map.log.level + INFO + + The logging level for the map task. The allowed levels are: + OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. + + + + + mapreduce.reduce.log.level + INFO + + The logging level for the reduce task. The allowed levels are: + OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. 
+ + http://git-wip-us.apache.org/repos/asf/incubator-ambari/blob/202e2571/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/YARN/configuration/capacity-scheduler.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/YARN/configuration/capacity-scheduler.xml b/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/YARN/configuration/capacity-scheduler.xml index 3f78292..ccfb779 100644 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/YARN/configuration/capacity-scheduler.xml +++ b/ambari-server/src/main/resources/stacks/HDP/2.0.5/services/YARN/configuration/capacity-scheduler.xml @@ -27,7 +27,7 @@ yarn.scheduler.capacity.maximum-am-resource-percent - 0.1 + 0.2 Maximum percent of resources in the cluster which can be used to run application masters i.e. controls number of concurrent running http://git-wip-us.apache.org/repos/asf/incubator-ambari/blob/202e2571/ambari-server/src/main/resources/stacks/HDPLocal/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDPLocal/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml b/ambari-server/src/main/resources/stacks/HDPLocal/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml index 28072f7..9a555fd 100644 --- a/ambari-server/src/main/resources/stacks/HDPLocal/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml +++ b/ambari-server/src/main/resources/stacks/HDPLocal/2.0.5/services/MAPREDUCE2/configuration/mapred-site.xml @@ -22,101 +22,93 @@ - + mapreduce.task.io.sort.mb 100 - No description + + The total amount of buffer memory to use while sorting files, in megabytes. + By default, gives each merge stream 1MB, which should minimize seeks. + mapreduce.map.sort.spill.percent 0.1 - No description + + The soft limit in the serialization buffer. Once reached, a thread will + begin to spill the contents to disk in the background. Note that + collection will not block if this threshold is exceeded while a spill + is already in progress, so spills may be larger than this threshold when + it is set to less than .5 + mapreduce.task.io.sort.factor 100 - No description - - - - - - mapreduce.jobtracker.system.dir - - No description - true - - - - - mapreduce.cluster.local.dir - - No description - true + + The number of streams to merge at once while sorting files. + This determines the number of open file handles. + + mapreduce.reduce.shuffle.parallelcopies 30 - No description - - - - mapreduce.tasktracker.map.tasks.maximum - - No description + + The default number of parallel transfers run by reduce during + the copy(shuffle) phase. + mapreduce.map.speculative false - If true, then multiple instances of some map tasks - may be executed in parallel. + + If true, then multiple instances of some map tasks + may be executed in parallel. + mapreduce.reduce.speculative false - If true, then multiple instances of some reduce tasks - may be executed in parallel. + + If true, then multiple instances of some reduce tasks may be + executed in parallel. + mapreduce.job.reduce.slowstart.completedmaps 0.05 - - - - mapreduce.reduce.merge.inmem.threshold - 1000 - The threshold, in terms of the number of files - for the in-memory merge process. When we accumulate threshold number of files - we initiate the in-memory merge and spill to disk. 
A value of 0 or less than - 0 indicates we want to DON'T have any threshold and instead depend only on - the ramfs's memory consumption to trigger the merge. - + + Fraction of the number of maps in the job which should be complete before + reduces are scheduled for the job. + mapreduce.reduce.shuffle.merge.percent 0.66 - The usage threshold at which an in-memory merge will be - initiated, expressed as a percentage of the total memory allocated to - storing in-memory map outputs, as defined by - mapreduce.reduce.shuffle.input.buffer.percent. - + + The usage threshold at which an in-memory merge will be + initiated, expressed as a percentage of the total memory allocated to + storing in-memory map outputs, as defined by + mapreduce.reduce.shuffle.input.buffer.percent. + mapreduce.reduce.shuffle.input.buffer.percent 0.7 - The percentage of memory to be allocated from the maximum heap - size to storing map outputs during the shuffle. - + + The percentage of memory to be allocated from the maximum heap + size to storing map outputs during the shuffle. + @@ -127,144 +119,254 @@ - - mapreduce.output.fileoutputformat.compress.type - BLOCK - If the job outputs are to compressed as SequenceFiles, how should - they be compressed? Should be one of NONE, RECORD or BLOCK. - - + + mapreduce.output.fileoutputformat.compress.type + BLOCK + + If the job outputs are to compressed as SequenceFiles, how should + they be compressed? Should be one of NONE, RECORD or BLOCK. + + mapreduce.reduce.input.buffer.percent 0.0 - The percentage of memory- relative to the maximum heap size- to - retain map outputs during the reduce. When the shuffle is concluded, any - remaining map outputs in memory must consume less than this threshold before - the reduce can begin. - + + The percentage of memory- relative to the maximum heap size- to + retain map outputs during the reduce. When the shuffle is concluded, any + remaining map outputs in memory must consume less than this threshold before + the reduce can begin. + - - mapreduce.reduce.input.limit - 10737418240 - The limit on the input size of the reduce. (This value - is 10 Gb.) If the estimated input size of the reduce is greater than - this value, job is failed. A value of -1 means that there is no limit - set. - - - mapreduce.map.output.compress - mapreduce.task.timeout 600000 - The number of milliseconds before a task will be - terminated if it neither reads an input, writes an output, nor - updates its status string. - + + The number of milliseconds before a task will be + terminated if it neither reads an input, writes an output, nor + updates its status string. + - mapred.child.java.opts - -Xmx512m - No description + mapreduce.map.memory.mb + 1536 - mapreduce.cluster.reducememory.mb - 2048 + mapreduce.reduce.memory.mb + 1024 - mapreduce.map.memory.mb - 1536 + mapreduce.tasktracker.keytab.file + + The filename of the keytab for the task tracker - mapreduce.reduce.memory.mb + mapreduce.jobhistory.keytab.file + + + The keytab for the job history server principal. + + + + mapreduce.shuffle.port + 13562 + + Default port that the ShuffleHandler will run on. + ShuffleHandler is a service run at the NodeManager to facilitate + transfers of intermediate Map outputs to requesting Reducers. + + + + + mapreduce.jobhistory.intermediate-done-dir + /mr-history/tmp + + Directory where history files are written by MapReduce jobs. + + + + + mapreduce.jobhistory.done-dir + /mr-history/done + + Directory where history files are managed by the MR JobHistory Server. 
+ + + +      + mapreduce.jobhistory.address + localhost:10020 + Enter your JobHistoryServer hostname. + + +      + mapreduce.jobhistory.webapp.address + localhost:19888 + Enter your JobHistoryServer hostname. + + + + mapreduce.framework.name + yarn + + The runtime framework for executing MapReduce jobs. Can be one of local, + classic or yarn. + + + + + yarn.app.mapreduce.am.staging-dir + /user + + The staging dir used while submitting jobs. + + + + + yarn.app.mapreduce.am.resource.mb 1024 + The amount of memory the MR AppMaster needs. - mapreduce.jobtracker.tasktracker.maxblacklists - 16 + yarn.app.mapreduce.am.command-opts + -Xmx756m - if node is reported blacklisted by 16 successful jobs within timeout-window, it will be graylisted + Java opts for the MR App Master processes. + The following symbol, if present, will be interpolated: @taskid@ is replaced + by current TaskID. Any other occurrences of '@' will go unchanged. + For example, to enable verbose gc logging to a file named for the taskid in + /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: + -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc + + Usage of -Djava.library.path can cause programs to no longer function if + hadoop native libraries are used. These values should instead be set as part + of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and + mapreduce.reduce.env config settings. - mapreduce.tasktracker.healthchecker.script.path - + yarn.app.mapreduce.am.admin-command-opts + -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN + + Java opts for the MR App Master processes for admin purposes. + It will appears before the opts set by yarn.app.mapreduce.am.command-opts and + thus its options can be overridden user. + + Usage of -Djava.library.path can cause programs to no longer function if + hadoop native libraries are used. These values should instead be set as part + of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and + mapreduce.reduce.env config settings. + - mapreduce.tasktracker.healthchecker.script.timeout - 60000 + yarn.app.mapreduce.am.log.level + INFO + MR App Master process log level. - mapreduce.tasktracker.keytab.file + yarn.app.mapreduce.am.env - The filename of the keytab for the task tracker + + User added environment variables for the MR App Master + processes. Example : + 1) A=foo This will set the env variable A to foo + 2) B=$B:c This is inherit tasktracker's B env variable. + - - mapreduce.jobhistory.keytab.file - - - The keytab for the job history server principal. - - - - mapreduce.shuffle.port - 8081 - Default port that the ShuffleHandler will run on. ShuffleHandler is a service run at the NodeManager to facilitate transfers of intermediate Map outputs to requesting Reducers. - - - - mapreduce.jobhistory.intermediate-done-dir - /mr-history/tmp - Directory where history files are written by MapReduce jobs. - - - - mapreduce.jobhistory.done-dir - /mr-history/done - Directory where history files are managed by the MR JobHistory Server. - - -      - mapreduce.jobhistory.address      - localhost:10020 - Enter your JobHistoryServer hostname. - - -      - mapreduce.jobhistory.webapp.address      - localhost:19888 - Enter your JobHistoryServer hostname. - - - - mapreduce.framework.name - yarn - No description - - - - yarn.app.mapreduce.am.staging-dir - /user - - The staging dir used while submitting jobs. 
- - + + mapreduce.admin.map.child.java.opts + -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN + + + + mapreduce.admin.reduce.child.java.opts + -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN + + + + mapreduce.application.classpath + $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/* + + CLASSPATH for MR applications. A comma-separated list of CLASSPATH + entries. + + + + + mapreduce.am.max-attempts + 2 + + The maximum number of application attempts. It is a + application-specific setting. It should not be larger than the global number + set by resourcemanager. Otherwise, it will be override. The default number is + set to 2, to allow at least one retry for AM. + + + + + mapreduce.map.memory.mb + 512 + + Larger resource limit for maps. + + + + + mapreduce.map.java.opts + -Xmx320m + + Larger heap-size for child jvms of maps. + + + + + mapreduce.reduce.memory.mb + 1024 + + Larger resource limit for reduces. + + + + + mapreduce.reduce.java.opts + -Xmx756m + + Larger heap-size for child jvms of reduces. + + + + + mapreduce.map.log.level + INFO + + The logging level for the map task. The allowed levels are: + OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. + + + + + mapreduce.reduce.log.level + INFO + + The logging level for the reduce task. The allowed levels are: + OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. + +
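
A note on how the new memory settings fit together. Throughout both mapred-site.xml files the commit pairs a YARN container size (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, yarn.app.mapreduce.am.resource.mb) with a matching child-JVM heap (mapreduce.map.java.opts, mapreduce.reduce.java.opts, yarn.app.mapreduce.am.command-opts). The sketch below is not part of the commit; it only regroups values that appear in the diff above, and the comments about keeping -Xmx at roughly 60-75% of the container size describe the apparent intent, not something the commit states.

  <!-- Sketch: container size vs. child JVM heap, values as added by this commit.
       Each -Xmx is kept below the YARN container size (mapreduce.*.memory.mb)
       so that heap plus JVM and native overhead fits inside the container. -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>        <!-- container size requested for each map task -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx320m</value>   <!-- map heap, about 62% of the 512 MB container -->
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>       <!-- container size requested for each reduce task -->
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx756m</value>   <!-- reduce heap, about 74% of the 1024 MB container -->
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>       <!-- container size for the MR ApplicationMaster -->
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx756m</value>   <!-- AM heap, again below its container size -->
  </property>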
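
The capacity-scheduler.xml change raises yarn.scheduler.capacity.maximum-am-resource-percent from 0.1 to 0.2. As the property's own description says, this caps the share of cluster resources that can be used to run ApplicationMasters, and therefore the number of concurrently running applications. As a worked example with hypothetical numbers (not taken from the commit): on a cluster exposing 80 GB of total YARN memory, 0.2 leaves up to 16 GB for AMs; with the 1024 MB AM container configured in mapred-site.xml above, that allows roughly 16 concurrent MapReduce applications, about double what the previous 0.1 setting would permit.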