hadoop-hdfs-user mailing list archives

From ch huang <justlo...@gmail.com>
Subject issue with MR job on YARN framework
Date Tue, 03 Dec 2013 02:07:39 GMT
              I am running a job on my CDH 4.4 YARN cluster. Its map tasks
finish very quickly, but the reduce task is very slow. Checking the reduce
task with ps, I found it is running with a 200 MB heap, so I tried to
increase the heap used by reduce tasks by adding

    YARN_OPTS="$YARN_OPTS -Dmapreduce.reduce.java.opts=-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=15M -XX:-UseGCOverheadLimit"

to yarn-env.sh. But after restarting the NodeManager, new reduce tasks
still use a 200 MB heap. Why?

# jps
2853 DataNode
19533 Jps
10949 YarnChild
10661 NodeManager
15130 HRegionServer
# ps -ef|grep 10949
yarn     10949 10661 99 09:52 ?        00:19:31
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx200m
-Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 48936
attempt_1385983958793_0022_r_000000_14 5650
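
For context, mapreduce.reduce.java.opts is a per-job property read from the
job configuration, not a daemon option, so exporting it through YARN_OPTS in
yarn-env.sh only affects the NodeManager daemon's own JVM, not the task
containers it launches. A hedged sketch of the usual way to set it, in
mapred-site.xml (the values here are illustrative, not from the original
message):

```xml
<!-- mapred-site.xml: per-job defaults for reduce task JVMs.
     Heap and container sizes below are illustrative assumptions. -->
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <!-- the YARN container memory request should exceed the JVM heap -->
  <name>mapreduce.reduce.memory.mb</name>
  <value>1536</value>
</property>
```

The same property can also be passed at submission time with
-Dmapreduce.reduce.java.opts=-Xmx1024m, assuming the job's driver uses
GenericOptionsParser (e.g. via ToolRunner).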
