giraph-user mailing list archives

From D Adams <dadam...@gmail.com>
Subject Re: Help with Giraph on Yarn
Date Sun, 23 Nov 2014 15:10:29 GMT
I'm not sure which logs you are referring to, but the ones below are those I
believe contain the pertinent information. Please let me know if I should be
looking at different ones than the ones shown here.
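For reference, the per-container stdout/stderr (e.g. gam-stdout.log /
gam-stderr.log for the Giraph application master) is usually more informative
than the daemon logs below; it typically sits under the NodeManager's
userlogs directory. A sketch of where to look, with the layout mocked up
locally so the commands are self-contained (the root path and the stderr
content here are illustrative, not from my cluster):

```shell
# Illustrative sketch: mock the usual NodeManager userlogs layout, then run
# the grep you would point at the real directory on the node
# (commonly $HADOOP_HOME/logs/userlogs/<appId>/<containerId>/).
ROOT=$(mktemp -d)
CDIR=$ROOT/userlogs/application_1416359131664_0003/container_1416359131664_0003_01_000003
mkdir -p "$CDIR"
# Stand-in content; the real gam-stderr.log holds the container's actual error:
echo 'example stderr line from the failed container' > "$CDIR/gam-stderr.log"
grep -H 'stderr' "$CDIR/gam-stderr.log"
```

If log aggregation is enabled, `yarn logs -applicationId
application_1416359131664_0003` should also dump the container logs once the
application finishes.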

hadoop-hduser-datanode-Roosevelt.log
2014-11-19 02:10:48,239 INFO
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool
BP-966991712-127.0.1.1-1416227756418 Total blocks: 34, missing metadata
files:0, missing block files:0, missing blocks in memory:0, mismatched
blocks:0
2014-11-19 02:26:14,768 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-966991712-127.0.1.1-1416227756418:blk_1073741912_1088 src: /
127.0.0.1:59085 dest: /127.0.0.1:50010
2014-11-19 02:26:14,809 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
127.0.0.1:59085, dest: /127.0.0.1:50010, bytes: 79531, op: HDFS_WRITE,
cliID: DFSClient_NONMAPREDUCE_944541121_1, offset: 0, srvID:
f0f8ef96-2b0d-41a8-b316-03f810393f79, blockid:
BP-966991712-127.0.1.1-1416227756418:blk_1073741912_1088, duration: 38485107
2014-11-19 02:26:14,810 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-966991712-127.0.1.1-1416227756418:blk_1073741912_1088,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-11-19 02:26:15,664 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-966991712-127.0.1.1-1416227756418:blk_1073741913_1089 src: /
127.0.0.1:59086 dest: /127.0.0.1:50010
2014-11-19 02:26:16,179 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification
succeeded for BP-966991712-127.0.1.1-1416227756418:blk_1073741912_1088
2014-11-19 02:26:18,627 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
127.0.0.1:59086, dest: /127.0.0.1:50010, bytes: 49879311, op: HDFS_WRITE,
cliID: DFSClient_NONMAPREDUCE_944541121_1, offset: 0, srvID:
f0f8ef96-2b0d-41a8-b316-03f810393f79, blockid:
BP-966991712-127.0.1.1-1416227756418:blk_1073741913_1089, duration:
2960435773
2014-11-19 02:26:18,652 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-966991712-127.0.1.1-1416227756418:blk_1073741913_1089,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-11-19 02:26:20,774 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
127.0.0.1:50010, dest: /127.0.0.1:59090, bytes: 50268995, op: HDFS_READ,
cliID: DFSClient_NONMAPREDUCE_-325400111_290, offset: 0, srvID:
f0f8ef96-2b0d-41a8-b316-03f810393f79, blockid:
BP-966991712-127.0.1.1-1416227756418:blk_1073741913_1089, duration:
1163407839
2014-11-19 02:26:20,838 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
127.0.0.1:50010, dest: /127.0.0.1:59090, bytes: 80155, op: HDFS_READ,
cliID: DFSClient_NONMAPREDUCE_-325400111_290, offset: 0, srvID:
f0f8ef96-2b0d-41a8-b316-03f810393f79, blockid:
BP-966991712-127.0.1.1-1416227756418:blk_1073741912_1088, duration: 212261
2014-11-19 02:26:29,673 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-966991712-127.0.1.1-1416227756418:blk_1073741914_1090 src: /
127.0.0.1:59093 dest: /127.0.0.1:50010
2014-11-19 02:26:29,717 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
127.0.0.1:59093, dest: /127.0.0.1:50010, bytes: 84216, op: HDFS_WRITE,
cliID: DFSClient_NONMAPREDUCE_-1594099409_1, offset: 0, srvID:
f0f8ef96-2b0d-41a8-b316-03f810393f79, blockid:
BP-966991712-127.0.1.1-1416227756418:blk_1073741914_1090, duration: 40250581
2014-11-19 02:26:29,717 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-966991712-127.0.1.1-1416227756418:blk_1073741914_1090,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-11-19 02:26:34,183 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741912_1088 file
/home/hduser/mydata/hdfs/datanode/current/BP-966991712-127.0.1.1-1416227756418/current/finalized/blk_1073741912
for deletion
2014-11-19 02:26:34,184 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-966991712-127.0.1.1-1416227756418 blk_1073741912_1088 file
/home/hduser/mydata/hdfs/datanode/current/BP-966991712-127.0.1.1-1416227756418/current/finalized/blk_1073741912
2014-11-19 02:27:09,569 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification
succeeded for BP-966991712-127.0.1.1-1416227756418:blk_1073741913_1089
2014-11-19 02:27:09,570 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification
succeeded for BP-966991712-127.0.1.1-1416227756418:blk_1073741914_1090


hadoop-hduser-datanode-Roosevelt.out
ulimit -a for user hduser
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32148
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32148
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited



hadoop-hduser-namenode-Roosevelt.log
2014-11-19 02:26:14,252 INFO
org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3
Total time for transactions(ms): 0 Number of transactions batched in Syncs:
0 Number of syncs: 2 SyncTimes(ms): 21
2014-11-19 02:26:14,566 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
allocateBlock:
/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-conf.xml.
BP-966991712-127.0.1.1-1416227756418
blk_1073741912_1088{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-64c16a3b-f775-4aeb-9f83-0d3684111b9a:NORMAL|RBW]]}
2014-11-19 02:26:14,810 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 127.0.0.1:50010 is added to
blk_1073741912_1088{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-64c16a3b-f775-4aeb-9f83-0d3684111b9a:NORMAL|RBW]]}
size 0
2014-11-19 02:26:15,028 INFO org.apache.hadoop.hdfs.StateChange: DIR*
completeFile:
/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-conf.xml
is closed by DFSClient_NONMAPREDUCE_944541121_1
2014-11-19 02:26:15,660 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
allocateBlock:
/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar.
BP-966991712-127.0.1.1-1416227756418
blk_1073741913_1089{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-64c16a3b-f775-4aeb-9f83-0d3684111b9a:NORMAL|RBW]]}
2014-11-19 02:26:18,651 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK*
checkFileProgress: blk_1073741913_1089{blockUCState=COMMITTED,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-64c16a3b-f775-4aeb-9f83-0d3684111b9a:NORMAL|RBW]]}
has not reached minimal replication 1
2014-11-19 02:26:18,652 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 127.0.0.1:50010 is added to
blk_1073741913_1089{blockUCState=COMMITTED, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-64c16a3b-f775-4aeb-9f83-0d3684111b9a:NORMAL|RBW]]}
size 49879311
2014-11-19 02:26:19,054 INFO org.apache.hadoop.hdfs.StateChange: DIR*
completeFile:
/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
is closed by DFSClient_NONMAPREDUCE_944541121_1

hadoop-hduser-namenode-Roosevelt.out
Nov 18, 2014 7:26:18 PM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
  org.apache.hadoop.hdfs.server.namenode.web.resources
  org.apache.hadoop.hdfs.web.resources
Nov 18, 2014 7:26:19 PM com.sun.jersey.api.core.ScanningResourceConfig
logClasses
INFO: Root resource classes found:
  class
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
Nov 18, 2014 7:26:19 PM com.sun.jersey.api.core.ScanningResourceConfig
logClasses
INFO: Provider classes found:
  class org.apache.hadoop.hdfs.web.resources.UserProvider
  class org.apache.hadoop.hdfs.web.resources.ExceptionHandler
Nov 18, 2014 7:26:19 PM
com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17
AM'
Nov 18, 2014 7:26:20 PM com.sun.jersey.spi.inject.Errors
processErrorMessages
WARNING: The following warnings have been detected with resource and/or
provider classes:
  WARNING: A sub-resource method, public javax.ws.rs.core.Response
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.deleteRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.DeleteOpParam,org.apache.hadoop.hdfs.web.resources.RecursiveParam,org.apache.hadoop.hdfs.web.resources.SnapshotNameParam)
throws java.io.IOException,java.lang.InterruptedException, with URI
template, "/", is treated as a resource method
  WARNING: A sub-resource method, public javax.ws.rs.core.Response
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.putRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.PutOpParam,org.apache.hadoop.hdfs.web.resources.DestinationParam,org.apache.hadoop.hdfs.web.resources.OwnerParam,org.apache.hadoop.hdfs.web.resources.GroupParam,org.apache.hadoop.hdfs.web.resources.PermissionParam,org.apache.hadoop.hdfs.web.resources.OverwriteParam,org.apache.hadoop.hdfs.web.resources.BufferSizeParam,org.apache.hadoop.hdfs.web.resources.ReplicationParam,org.apache.hadoop.hdfs.web.resources.BlockSizeParam,org.apache.hadoop.hdfs.web.resources.ModificationTimeParam,org.apache.hadoop.hdfs.web.resources.AccessTimeParam,org.apache.hadoop.hdfs.web.resources.RenameOptionSetParam,org.apache.hadoop.hdfs.web.resources.CreateParentParam,org.apache.hadoop.hdfs.web.resources.TokenArgumentParam,org.apache.hadoop.hdfs.web.resources.AclPermissionParam,org.apache.hadoop.hdfs.web.resources.XAttrNameParam,org.apache.hadoop.hdfs.web.resources.XAttrValueParam,org.apache.hadoop.hdfs.web.resources.XAttrSetFlagParam,org.apache.hadoop.hdfs.web.resources.SnapshotNameParam,org.apache.hadoop.hdfs.web.resources.OldSnapshotNameParam)
throws java.io.IOException,java.lang.InterruptedException, with URI
template, "/", is treated as a resource method
  WARNING: A sub-resource method, public javax.ws.rs.core.Response
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.postRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.PostOpParam,org.apache.hadoop.hdfs.web.resources.ConcatSourcesParam,org.apache.hadoop.hdfs.web.resources.BufferSizeParam)
throws java.io.IOException,java.lang.InterruptedException, with URI
template, "/", is treated as a resource method
  WARNING: A sub-resource method, public javax.ws.rs.core.Response
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.GetOpParam,org.apache.hadoop.hdfs.web.resources.OffsetParam,org.apache.hadoop.hdfs.web.resources.LengthParam,org.apache.hadoop.hdfs.web.resources.RenewerParam,org.apache.hadoop.hdfs.web.resources.BufferSizeParam,java.util.List,org.apache.hadoop.hdfs.web.resources.XAttrEncodingParam)
throws java.io.IOException,java.lang.InterruptedException, with URI
template, "/", is treated as a resource method



yarn-hduser-nodemanager-Roosevelt.log
2014-11-19 02:26:19,496 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Start request for container_1416359131664_0003_01_000001 by user hduser
2014-11-19 02:26:19,496 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Creating a new application reference for app application_1416359131664_0003
2014-11-19 02:26:19,496 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
Application application_1416359131664_0003 transitioned from NEW to INITING
2014-11-19 02:26:19,497 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
Application application_1416359131664_0003 transitioned from INITING to
RUNNING
2014-11-19 02:26:19,497 INFO
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser
IP=127.0.0.1    OPERATION=Start Container Request
TARGET=ContainerManageImpl    RESULT=SUCCESS
APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000001
2014-11-19 02:26:19,497 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
Adding container_1416359131664_0003_01_000001 to application
application_1416359131664_0003
2014-11-19 02:26:19,497 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000001 transitioned from NEW to
LOCALIZING
2014-11-19 02:26:19,497 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_INIT for appId application_1416359131664_0003
2014-11-19 02:26:19,497 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
Resource
hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
transitioned from INIT to DOWNLOADING
2014-11-19 02:26:19,498 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
Resource
hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-conf.xml
transitioned from INIT to DOWNLOADING
2014-11-19 02:26:19,498 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
Created localizer for container_1416359131664_0003_01_000001
2014-11-19 02:26:19,506 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
Writing credentials to the nmPrivate file
/app/hadoop/tmp/nm-local-dir/nmPrivate/container_1416359131664_0003_01_000001.tokens.
Credentials list:
2014-11-19 02:26:19,508 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
Initializing user hduser
2014-11-19 02:26:19,521 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying
from
/app/hadoop/tmp/nm-local-dir/nmPrivate/container_1416359131664_0003_01_000001.tokens
to
/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003/container_1416359131664_0003_01_000001.tokens
2014-11-19 02:26:19,522 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set
to
/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003
=
file:/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003
2014-11-19 02:26:20,823 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
Resource
hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar(->/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003/filecache/10/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar)
transitioned from DOWNLOADING to LOCALIZED
2014-11-19 02:26:20,847 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
Resource
hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0003/giraph-conf.xml(->/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003/filecache/11/giraph-conf.xml)
transitioned from DOWNLOADING to LOCALIZED
2014-11-19 02:26:20,848 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000001 transitioned from
LOCALIZING to LOCALIZED
2014-11-19 02:26:21,043 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000001 transitioned from
LOCALIZED to RUNNING
2014-11-19 02:26:21,066 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
launchContainer: [nice, -n, 0, bash,
/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003/container_1416359131664_0003_01_000001/default_container_executor.sh]
2014-11-19 02:26:22,932 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Starting resource-monitoring for container_1416359131664_0003_01_000001
2014-11-19 02:26:23,006 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Memory usage of ProcessTree 25248 for container-id
container_1416359131664_0003_01_000001: 41.4 MB of 1 GB physical memory
used; 654.4 MB of 2.1 GB virtual memory used
2014-11-19 02:26:26,032 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Memory usage of ProcessTree 25248 for container-id
container_1416359131664_0003_01_000001: 117.4 MB of 1 GB physical memory
used; 658.9 MB of 2.1 GB virtual memory used
2014-11-19 02:26:29,052 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Memory usage of ProcessTree 25248 for container-id
container_1416359131664_0003_01_000001: 147.8 MB of 1 GB physical memory
used; 660.5 MB of 2.1 GB virtual memory used
2014-11-19 02:26:30,285 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
Auth successful for appattempt_1416359131664_0003_000001 (auth:SIMPLE)
2014-11-19 02:26:30,291 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Start request for container_1416359131664_0003_01_000003 by user hduser
2014-11-19 02:26:30,292 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
Adding container_1416359131664_0003_01_000003 to application
application_1416359131664_0003
2014-11-19 02:26:30,292 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000003 transitioned from NEW to
LOCALIZING
2014-11-19 02:26:30,292 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_INIT for appId application_1416359131664_0003
2014-11-19 02:26:30,292 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000003 transitioned from
LOCALIZING to LOCALIZED
2014-11-19 02:26:30,292 INFO
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser
IP=127.0.0.1    OPERATION=Start Container Request
TARGET=ContainerManageImpl    RESULT=SUCCESS
APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000003
2014-11-19 02:26:30,316 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Getting container-status for container_1416359131664_0003_01_000003
2014-11-19 02:26:30,316 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
Returning ContainerStatus: [ContainerId:
container_1416359131664_0003_01_000003, State: RUNNING, Diagnostics: ,
ExitStatus: -1000, ]
2014-11-19 02:26:30,342 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000003 transitioned from
LOCALIZED to RUNNING
2014-11-19 02:26:30,364 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
launchContainer: [nice, -n, 0, bash,
/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003/container_1416359131664_0003_01_000003/default_container_executor.sh]
2014-11-19 02:26:30,383 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit
code from container container_1416359131664_0003_01_000003 is : 1
2014-11-19 02:26:30,383 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
Exception from container-launch with container ID:
container_1416359131664_0003_01_000003 and exit code: 1
ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2014-11-19 02:26:30,384 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-11-19 02:26:30,384 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Container exited with a non-zero exit code 1
2014-11-19 02:26:30,384 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000003 transitioned from RUNNING
to EXITED_WITH_FAILURE
2014-11-19 02:26:30,384 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Cleaning up container container_1416359131664_0003_01_000003
2014-11-19 02:26:30,402 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
Deleting absolute path :
/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416359131664_0003/container_1416359131664_0003_01_000003
2014-11-19 02:26:30,403 WARN
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser
OPERATION=Container Finished - Failed    TARGET=ContainerImpl
RESULT=FAILURE    DESCRIPTION=Container failed with state:
EXITED_WITH_FAILURE    APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000003
2014-11-19 02:26:30,403 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Container container_1416359131664_0003_01_000003 transitioned from
EXITED_WITH_FAILURE to DONE
2014-11-19 02:26:30,403 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
Removing container_1416359131664_0003_01_000003 from application
application_1416359131664_0003
2014-11-19 02:26:30,403 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got
event CONTAINER_STOP for appId application_1416359131664_0003
2014-11-19 02:26:31,136 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed
completed containers from NM context:
[container_1416359131664_0003_01_000003]

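The failing container can be pulled out of a NodeManager log like the one
above with a quick grep. Demonstrated here on the exact WARN line quoted, so
the snippet is self-contained; on the node itself you would point it at
yarn-hduser-nodemanager-Roosevelt.log instead:

```shell
# Extract "<containerId> is : <exit code>" from a DefaultContainerExecutor
# WARN line (input below is the line quoted in the log above):
printf '%s\n' \
  '2014-11-19 02:26:30,383 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1416359131664_0003_01_000003 is : 1' \
  | grep -o 'container_[0-9_]* is : [0-9]*'
# → container_1416359131664_0003_01_000003 is : 1
```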

yarn-hduser-resourcemanager-Roosevelt.log
2014-11-19 02:26:13,573 INFO
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated
new applicationId: 3
2014-11-19 02:26:19,074 WARN
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific
max attempts: 0 for application: 3 is invalid, because it is out of the
range [1, 2]. Use the global max attempts instead.
2014-11-19 02:26:19,074 INFO
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application
with id 3 submitted by user hduser
2014-11-19 02:26:19,074 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
IP=127.0.0.1    OPERATION=Submit Application Request
TARGET=ClientRMService    RESULT=SUCCESS
APPID=application_1416359131664_0003
2014-11-19 02:26:19,074 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing
application with id application_1416359131664_0003
2014-11-19 02:26:19,074 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from NEW to NEW_SAVING
2014-11-19 02:26:19,075 INFO
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore:
Storing info for app: application_1416359131664_0003
2014-11-19 02:26:19,075 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from NEW_SAVING to SUBMITTED
2014-11-19 02:26:19,075 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Application added - appId: application_1416359131664_0003 user: hduser
leaf-queue of parent: root #applications: 1
2014-11-19 02:26:19,075 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Accepted application application_1416359131664_0003 from user: hduser, in
queue: default
2014-11-19 02:26:19,075 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from SUBMITTED to ACCEPTED
2014-11-19 02:26:19,076 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService:
Registering app attempt : appattempt_1416359131664_0003_000001
2014-11-19 02:26:19,076 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from NEW to SUBMITTED
2014-11-19 02:26:19,076 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Application application_1416359131664_0003 from user: hduser activated in
queue: default
2014-11-19 02:26:19,076 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Application added - appId: application_1416359131664_0003 user:
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@ec6584,
leaf-queue: default #user-pending-applications: 0
#user-active-applications: 1 #queue-pending-applications: 0
#queue-active-applications: 1
2014-11-19 02:26:19,076 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Added Application Attempt appattempt_1416359131664_0003_000001 to scheduler
from user hduser in queue default
2014-11-19 02:26:19,076 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from SUBMITTED to
SCHEDULED
2014-11-19 02:26:19,482 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000001 Container Transitioned from NEW to
ALLOCATED
2014-11-19 02:26:19,482 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=AM Allocated Container    TARGET=SchedulerApp
RESULT=SUCCESS    APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000001
2014-11-19 02:26:19,482 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Assigned container container_1416359131664_0003_01_000001 of capacity
<memory:1024, vCores:1> on host roosevelt:58195, which has 1 containers,
<memory:1024, vCores:1> used and <memory:7168, vCores:7> available after
allocation
2014-11-19 02:26:19,482 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignedContainer application attempt=appattempt_1416359131664_0003_000001
container=Container: [ContainerId: container_1416359131664_0003_01_000001,
NodeId: roosevelt:58195, NodeHttpAddress: roosevelt:8042, Resource:
<memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default:
capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>,
usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
clusterResource=<memory:8192, vCores:8>
2014-11-19 02:26:19,482 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting assigned queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>,
usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2014-11-19 02:26:19,482 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125
used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2014-11-19 02:26:19,483 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Sending NMToken for nodeId : roosevelt:58195 for container :
container_1416359131664_0003_01_000001
2014-11-19 02:26:19,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000001 Container Transitioned from
ALLOCATED to ACQUIRED
2014-11-19 02:26:19,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Clear node set for appattempt_1416359131664_0003_000001
2014-11-19 02:26:19,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
Storing attempt: AppId: application_1416359131664_0003 AttemptId:
appattempt_1416359131664_0003_000001 MasterContainer: Container:
[ContainerId: container_1416359131664_0003_01_000001, NodeId:
roosevelt:58195, NodeHttpAddress: roosevelt:8042, Resource: <memory:1024,
vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service:
127.0.1.1:58195 }, ]
2014-11-19 02:26:19,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from SCHEDULED to
ALLOCATED_SAVING
2014-11-19 02:26:19,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from ALLOCATED_SAVING to
ALLOCATED
2014-11-19 02:26:19,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher:
Launching masterappattempt_1416359131664_0003_000001
2014-11-19 02:26:19,485 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher:
Setting up container Container: [ContainerId:
container_1416359131664_0003_01_000001, NodeId: roosevelt:58195,
NodeHttpAddress: roosevelt:8042, Resource: <memory:1024, vCores:1>,
Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.1.1:58195
}, ] for AM appattempt_1416359131664_0003_000001
2014-11-19 02:26:19,486 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher:
Command to launch container container_1416359131664_0003_01_000001 :
${JAVA_HOME}/bin/java -Xmx512M -Xms512M -cp .:${CLASSPATH}
org.apache.giraph.yarn.GiraphApplicationMaster 1><LOG_DIR>/gam-stdout.log
2><LOG_DIR>/gam-stderr.log
2014-11-19 02:26:19,501 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done
launching container Container: [ContainerId:
container_1416359131664_0003_01_000001, NodeId: roosevelt:58195,
NodeHttpAddress: roosevelt:8042, Resource: <memory:1024, vCores:1>,
Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.1.1:58195
}, ] for AM appattempt_1416359131664_0003_000001
2014-11-19 02:26:19,501 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from ALLOCATED to LAUNCHED
2014-11-19 02:26:20,484 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000001 Container Transitioned from ACQUIRED
to RUNNING
2014-11-19 02:26:26,798 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
Auth successful for appattempt_1416359131664_0003_000001 (auth:SIMPLE)
2014-11-19 02:26:26,811 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM
registration appattempt_1416359131664_0003_000001
2014-11-19 02:26:26,811 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
IP=127.0.0.1    OPERATION=Register App Master
TARGET=ApplicationMasterService    RESULT=SUCCESS
APPID=application_1416359131664_0003
APPATTEMPTID=appattempt_1416359131664_0003_000001
2014-11-19 02:26:26,811 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from LAUNCHED to RUNNING
2014-11-19 02:26:26,811 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from ACCEPTED to RUNNING
2014-11-19 02:26:29,130 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000002 Container Transitioned from NEW to
ALLOCATED
2014-11-19 02:26:29,130 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=AM Allocated Container    TARGET=SchedulerApp
RESULT=SUCCESS    APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000002
2014-11-19 02:26:29,130 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Assigned container container_1416359131664_0003_01_000002 of capacity
<memory:3072, vCores:1> on host roosevelt:58195, which has 2 containers,
<memory:4096, vCores:2> used and <memory:4096, vCores:6> available after
allocation
2014-11-19 02:26:29,130 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignedContainer application attempt=appattempt_1416359131664_0003_000001
container=Container: [ContainerId: container_1416359131664_0003_01_000002,
NodeId: roosevelt:58195, NodeHttpAddress: roosevelt:8042, Resource:
<memory:3072, vCores:1>, Priority: 10, Token: null, ] queue=default:
capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>,
usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
clusterResource=<memory:8192, vCores:8>
2014-11-19 02:26:29,130 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting assigned queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:4096, vCores:2>,
usedCapacity=0.5, absoluteUsedCapacity=0.5, numApps=1, numContainers=2
2014-11-19 02:26:29,130 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.5 absoluteUsedCapacity=0.5
used=<memory:4096, vCores:2> cluster=<memory:8192, vCores:8>
2014-11-19 02:26:29,202 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Sending NMToken for nodeId : roosevelt:58195 for container :
container_1416359131664_0003_01_000002
2014-11-19 02:26:29,208 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000002 Container Transitioned from
ALLOCATED to ACQUIRED
2014-11-19 02:26:30,133 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000003 Container Transitioned from NEW to
ALLOCATED
2014-11-19 02:26:30,133 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=AM Allocated Container    TARGET=SchedulerApp
RESULT=SUCCESS    APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000003
2014-11-19 02:26:30,133 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Assigned container container_1416359131664_0003_01_000003 of capacity
<memory:3072, vCores:1> on host roosevelt:58195, which has 3 containers,
<memory:7168, vCores:3> used and <memory:1024, vCores:5> available after
allocation
2014-11-19 02:26:30,133 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignedContainer application attempt=appattempt_1416359131664_0003_000001
container=Container: [ContainerId: container_1416359131664_0003_01_000003,
NodeId: roosevelt:58195, NodeHttpAddress: roosevelt:8042, Resource:
<memory:3072, vCores:1>, Priority: 10, Token: null, ] queue=default:
capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:4096, vCores:2>,
usedCapacity=0.5, absoluteUsedCapacity=0.5, numApps=1, numContainers=2
clusterResource=<memory:8192, vCores:8>
2014-11-19 02:26:30,133 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting assigned queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:7168, vCores:3>,
usedCapacity=0.875, absoluteUsedCapacity=0.875, numApps=1, numContainers=3
2014-11-19 02:26:30,133 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.875 absoluteUsedCapacity=0.875
used=<memory:7168, vCores:3> cluster=<memory:8192, vCores:8>
2014-11-19 02:26:30,231 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000003 Container Transitioned from
ALLOCATED to ACQUIRED
2014-11-19 02:26:31,135 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000003 Container Transitioned from ACQUIRED
to COMPLETED
2014-11-19 02:26:31,135 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp:
Completed container: container_1416359131664_0003_01_000003 in state:
COMPLETED event:FINISHED
2014-11-19 02:26:31,135 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=AM Released Container    TARGET=SchedulerApp    RESULT=SUCCESS
APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000003
2014-11-19 02:26:31,135 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Released container container_1416359131664_0003_01_000003 of capacity
<memory:3072, vCores:1> on host roosevelt:58195, which currently has 2
containers, <memory:4096, vCores:2> used and <memory:4096, vCores:6>
available, release resources=true
2014-11-19 02:26:31,135 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
default used=<memory:4096, vCores:2> numContainers=2 user=hduser
user-resources=<memory:4096, vCores:2>
2014-11-19 02:26:31,136 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
completedContainer container=Container: [ContainerId:
container_1416359131664_0003_01_000003, NodeId: roosevelt:58195,
NodeHttpAddress: roosevelt:8042, Resource: <memory:3072, vCores:1>,
Priority: 10, Token: Token { kind: ContainerToken, service: 127.0.1.1:58195
}, ] queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:4096, vCores:2>, usedCapacity=0.5,
absoluteUsedCapacity=0.5, numApps=1, numContainers=2 cluster=<memory:8192,
vCores:8>
2014-11-19 02:26:31,136 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
completedContainer queue=root usedCapacity=0.5 absoluteUsedCapacity=0.5
used=<memory:4096, vCores:2> cluster=<memory:8192, vCores:8>
2014-11-19 02:26:31,136 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting completed queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:4096, vCores:2>,
usedCapacity=0.5, absoluteUsedCapacity=0.5, numApps=1, numContainers=2
2014-11-19 02:26:31,136 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application attempt appattempt_1416359131664_0003_000001 released container
container_1416359131664_0003_01_000003 on node: host: roosevelt:58195
#containers=2 available=4096 used=4096 with event: FINISHED
2014-11-19 02:38:51,740 INFO
org.apache.hadoop.yarn.util.AbstractLivelinessMonitor:
Expired:container_1416359131664_0003_01_000002 Timed out after 600 secs
2014-11-19 02:38:51,742 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000002 Container Transitioned from ACQUIRED
to EXPIRED
2014-11-19 02:38:51,742 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp:
Completed container: container_1416359131664_0003_01_000002 in state:
EXPIRED event:EXPIRE
2014-11-19 02:38:51,742 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=AM Released Container    TARGET=SchedulerApp    RESULT=SUCCESS
APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000002
2014-11-19 02:38:51,742 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Released container container_1416359131664_0003_01_000002 of capacity
<memory:3072, vCores:1> on host roosevelt:58195, which currently has 1
containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7>
available, release resources=true
2014-11-19 02:38:51,742 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
default used=<memory:1024, vCores:1> numContainers=1 user=hduser
user-resources=<memory:1024, vCores:1>
2014-11-19 02:38:51,744 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
completedContainer container=Container: [ContainerId:
container_1416359131664_0003_01_000002, NodeId: roosevelt:58195,
NodeHttpAddress: roosevelt:8042, Resource: <memory:3072, vCores:1>,
Priority: 10, Token: Token { kind: ContainerToken, service: 127.0.1.1:58195
}, ] queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:1024, vCores:1>, usedCapacity=0.125,
absoluteUsedCapacity=0.125, numApps=1, numContainers=1
cluster=<memory:8192, vCores:8>
2014-11-19 02:38:51,744 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
completedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125
used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2014-11-19 02:38:51,744 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting completed queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>,
usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2014-11-19 02:38:51,744 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application attempt appattempt_1416359131664_0003_000001 released container
container_1416359131664_0003_01_000002 on node: host: roosevelt:58195
#containers=1 available=7168 used=1024 with event: EXPIRE
2014-11-19 02:38:51,978 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
Updating application attempt appattempt_1416359131664_0003_000001 with
final state: FINISHING, and exit status: -1000
2014-11-19 02:38:51,978 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from RUNNING to
FINAL_SAVING
2014-11-19 02:38:51,979 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating
application application_1416359131664_0003 with final state: FINISHING
2014-11-19 02:38:51,979 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from RUNNING to FINAL_SAVING
2014-11-19 02:38:51,979 INFO
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore:
Updating info for app: application_1416359131664_0003
2014-11-19 02:38:51,980 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from FINAL_SAVING to
FINISHING
2014-11-19 02:38:51,980 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from FINAL_SAVING to FINISHING
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1416359131664_0003_01_000001 Container Transitioned from RUNNING
to COMPLETED
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp:
Completed container: container_1416359131664_0003_01_000001 in state:
COMPLETED event:FINISHED
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=AM Released Container    TARGET=SchedulerApp    RESULT=SUCCESS
APPID=application_1416359131664_0003
CONTAINERID=container_1416359131664_0003_01_000001
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Released container container_1416359131664_0003_01_000001 of capacity
<memory:1024, vCores:1> on host roosevelt:58195, which currently has 0
containers, <memory:0, vCores:0> used and <memory:8192, vCores:8>
available, release resources=true
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
default used=<memory:0, vCores:0> numContainers=0 user=hduser
user-resources=<memory:0, vCores:0>
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
completedContainer container=Container: [ContainerId:
container_1416359131664_0003_01_000001, NodeId: roosevelt:58195,
NodeHttpAddress: roosevelt:8042, Resource: <memory:1024, vCores:1>,
Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.1.1:58195
}, ] queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:0, vCores:0>, usedCapacity=0.0,
absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192,
vCores:8>
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0
used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting completed queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0,
absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2014-11-19 02:38:52,773 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application attempt appattempt_1416359131664_0003_000001 released container
container_1416359131664_0003_01_000001 on node: host: roosevelt:58195
#containers=0 available=8192 used=0 with event: FINISHED
2014-11-19 02:38:52,774 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService:
Unregistering app attempt : appattempt_1416359131664_0003_000001
2014-11-19 02:38:52,774 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1416359131664_0003_000001 State change from FINISHING to FINISHED
2014-11-19 02:38:52,774 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1416359131664_0003 State change from FINISHING to FINISHED
2014-11-19 02:38:52,775 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser
OPERATION=Application Finished - Succeeded    TARGET=RMAppManager
RESULT=SUCCESS    APPID=application_1416359131664_0003
2014-11-19 02:38:52,775 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary:
appId=application_1416359131664_0003,name=Giraph:
org.apache.giraph.examples.SimpleShortestPathsComputation,user=hduser,queue=default,state=FINISHED,trackingUrl=
http://roosevelt:8088/proxy/application_1416359131664_0003/A,appMasterHost=,startTime=1416385579074,finishTime=1416386331979,finalStatus=FAILED
2014-11-19 02:38:52,775 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application Attempt appattempt_1416359131664_0003_000001 is done.
finalState=FINISHED

On Sun, Nov 23, 2014 at 8:34 AM, D Adams <dadamszx@gmail.com> wrote:

> Sorry, in the previous email I meant to say: when I run with only
> -yj giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar,
> the container log complains that it does not contain giraph-core, and those
> were the resulting errors.
>
>
>
> On Sun, Nov 23, 2014 at 9:28 AM, D Adams <dadamszx@gmail.com> wrote:
>
>> Well, maybe my Giraph build did not build correctly, because when I run it I get:
>>
>> 2014-11-19 02:04:21,764 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:main(421)) - Starting GitaphAM
>> 2014-11-19 02:04:23,137 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 2014-11-19 02:04:25,548 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:<init>(168)) - GiraphAM  for ContainerId container_1416359131664_0002_01_000001 ApplicationAttemptId appattempt_1416359131664_0002_000001
>> 2014-11-19 02:04:25,890 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at localhost/127.0.0.1:8030
>> 2014-11-19 02:04:25,924 INFO  [main] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:serviceInit(107)) - Upper bound of the thread pool size is 500
>> 2014-11-19 02:04:25,925 INFO  [main] impl.ContainerManagementProtocolProxy (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
>> 2014-11-19 02:04:26,309 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:setupContainerAskForRM(279)) - Requested container ask: Capability[<memory:3000, vCores:0>]Priority[10]
>> 2014-11-19 02:04:26,577 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:setupContainerAskForRM(279)) - Requested container ask: Capability[<memory:3000, vCores:0>]Priority[10]
>> 2014-11-19 02:04:26,577 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:run(185)) - Wait to finish ..
>> 2014-11-19 02:04:28,612 INFO  [AMRM Heartbeater thread] impl.AMRMClientImpl (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : roosevelt:58195
>> 2014-11-19 02:04:28,614 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(605)) - Got response from RM for container ask, allocatedCnt=1
>> 2014-11-19 02:04:28,614 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(608)) - Total allocated # of container so far : 1 allocated out of 2 required.
>> 2014-11-19 02:04:28,614 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:startContainerLaunchingThreads(359)) - Launching command on a new container., containerId=container_1416359131664_0002_01_000002, containerNode=roosevelt:58195, containerNodeURI=roosevelt:8042, containerResourceMemory=3072
>> 2014-11-19 02:04:28,615 INFO  [pool-4-thread-1] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(492)) - Setting up container launch container for containerid=container_1416359131664_0002_01_000002
>> 2014-11-19 02:04:28,629 INFO  [pool-4-thread-1] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(498)) - Conatain launch Commands :java -Xmx3000M -Xms3000M -cp .:${CLASSPATH} org.apache.giraph.yarn.GiraphYarnTask 1416359131664 2 2 1 1><LOG_DIR>/task-2-stdout.log 2><LOG_DIR>/task-2-stderr.log
>> 2014-11-19 02:04:28,630 INFO  [pool-4-thread-1] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(518)) - Setting username in ContainerLaunchContext to: hduser
>> 2014-11-19 02:04:29,149 INFO  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFsResourcesToMap(72)) - Adding giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar to LocalResources for export.to hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0002/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>> 2014-11-19 02:04:29,188 INFO  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFileToResourceMap(160)) - Registered file in LocalResources :: hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0002/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar*2014-11-19 02:04:29,188 WARN  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFsResourcesToMap(81)) - Job jars (-yj option) didn't include giraph-core.*
>> 2014-11-19 02:04:29,189 INFO  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFileToResourceMap(160)) - Registered file in LocalResources :: hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0002/giraph-conf.xml
>> 2014-11-19 02:04:29,201 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing Event EventType: START_CONTAINER for Container container_1416359131664_0002_01_000002
>> 2014-11-19 02:04:29,202 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] impl.ContainerManagementProtocolProxy (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : roosevelt:58195
>> 2014-11-19 02:04:29,257 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing Event EventType: QUERY_CONTAINER for Container container_1416359131664_0002_01_000002
>> 2014-11-19 02:04:29,618 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(605)) - Got response from RM for container ask, allocatedCnt=1
>> 2014-11-19 02:04:29,618 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(608)) - Total allocated # of container so far : 2 allocated out of 2 required.
>> 2014-11-19 02:04:29,618 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:startContainerLaunchingThreads(359)) - Launching command on a new container., containerId=container_1416359131664_0002_01_000003, containerNode=roosevelt:58195, containerNodeURI=roosevelt:8042, containerResourceMemory=3072
>> 2014-11-19 02:04:29,619 INFO  [pool-4-thread-2] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(492)) - Setting up container launch container for containerid=container_1416359131664_0002_01_000003
>> 2014-11-19 02:04:29,619 INFO  [pool-4-thread-2] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(498)) - Conatain launch Commands :java -Xmx3000M -Xms3000M -cp .:${CLASSPATH} org.apache.giraph.yarn.GiraphYarnTask 1416359131664 2 3 1 1><LOG_DIR>/task-3-stdout.log 2><LOG_DIR>/task-3-stderr.log
>> 2014-11-19 02:04:29,619 INFO  [pool-4-thread-2] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(518)) - Setting username in ContainerLaunchContext to: hduser
>> 2014-11-19 02:04:29,620 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #2] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing Event EventType: START_CONTAINER for Container container_1416359131664_0002_01_000003
>> 2014-11-19 02:04:29,623 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #3] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing Event EventType: QUERY_CONTAINER for Container container_1416359131664_0002_01_000003
>> 2014-11-19 02:04:30,620 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(571)) - Got response from RM for container ask, completedCnt=2
>> 2014-11-19 02:04:30,620 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(574)) - Got container status for containerID=container_1416359131664_0002_01_000002, state=COMPLETE, exitStatus=1, diagnostics=Exception from container-launch: ExitCodeException exitCode=1:
>> ExitCodeException exitCode=1:
>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
>> 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>> 	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> 	at java.lang.Thread.run(Thread.java:745)
>>
>>
>> Container exited with a non-zero exit code 1
>>
>> 2014-11-19 02:04:30,620 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(574)) - Got container status for containerID=container_1416359131664_0002_01_000003, state=COMPLETE, exitStatus=1, diagnostics=Exception from container-launch: ExitCodeException exitCode=1:
>> ExitCodeException exitCode=1:
>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
>> 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>> 	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> 	at java.lang.Thread.run(Thread.java:745)
>>
>>
>> Container exited with a non-zero exit code 1
>>
>> 2014-11-19 02:04:30,620 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(594)) - All container compeleted. done = true
>> 2014-11-19 02:04:30,791 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:run(194)) - Done true
>> 2014-11-19 02:04:30,791 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:run(198)) - Forcefully terminating executors with done =:true
>> 2014-11-19 02:04:30,791 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:finish(212)) - Application completed. Stopping running containers
>> 2014-11-19 02:04:30,803 INFO  [main] impl.ContainerManagementProtocolProxy (ContainerManagementProtocolProxy.java:mayBeCloseProxy(145)) - Closing proxy : roosevelt:58195
>> 2014-11-19 02:04:30,803 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:finish(217)) - Application completed. Signalling finish to RM
>> 2014-11-19 02:04:30,815 INFO  [main] impl.AMRMClientImpl (AMRMClientImpl.java:unregisterApplicationMaster(321)) - Waiting for application to be successfully unregistered.
>> 2014-11-19 02:04:30,923 INFO  [AMRM Callback Handler Thread] impl.AMRMClientAsyncImpl (AMRMClientAsyncImpl.java:run(277)) - Interrupted while waiting for queue
>> java.lang.InterruptedException
>> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
>> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
>> 	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>> 	at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:275)
>> 2014-11-19 02:04:30,944 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:main(445)) - Giraph Application Master failed. exiting
>>
>> In a moment, I will paste all the logs I can find.
>>
>> On Sun, Nov 23, 2014 at 1:11 AM, Tripti Singh <tripti@yahoo-inc.com>
>> wrote:
>>
>>>   Sorry, the previous e-mail was sent before I could complete it.
>>>  The issue I was facing was due to Giraph's hard-coded dependency on a
>>> task with id 0. This task is the designated master responsible for creating
>>> the zookeeperServerList.
>>>  These task ids are derived from the ContainerId, under the assumption
>>> that containers will be assigned starting with id 2 (id 1 is reserved for
>>> the Application Master).
>>>  The following is the comment available in the Giraph YARN class:
>>>
>>>  /**
>>>   * Utility to create a TaskAttemptId we can feed to our fake
>>>   * Mapper#Context.
>>>   *
>>>   * NOTE: ContainerId will serve as MR TaskID for Giraph tasks.
>>>   * YARN container 1 is always AppMaster, so the least container id we
>>>   * will ever get from YARN for a Giraph task is container id 2. Giraph
>>>   * on MapReduce tasks must start at index 0. So we SUBTRACT TWO from
>>>   * each container id.
>>>   *
>>>   * @param args the command line args, fed to us by
>>>   *             GiraphApplicationMaster.
>>>   * @return the TaskAttemptId object, populated with YARN job data.
>>>   */
>>>
>>>
>>>  Now, my problem was that in our cluster this guarantee was not met:
>>> at times I was not assigned the container with id 2 (and container
>>> ids are also not assigned in increasing order).
>>>
>>> Because of this, ZooKeeper was not starting in many workflow runs.
>>>
>>> My fix was to remove this assumption.
>>>
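The subtract-two mapping Tripti describes, and the way it breaks when container id 2 is never granted, can be sketched as follows. This is a minimal illustration, not Giraph's actual code: the class and method names (`TaskIdSketch`, `taskIdFromContainerId`, `isZooKeeperMaster`) are invented here.

```java
/**
 * Minimal sketch (not Giraph source) of the task-id derivation described
 * above: Giraph assumes YARN hands out container ids starting at 2 (id 1
 * is the AppMaster), so task id = container id - 2. If the cluster never
 * grants container id 2, no container maps to task 0, and the designated
 * master that writes the zookeeperServerList is never elected.
 */
public class TaskIdSketch {

    /** The subtract-two assumption: container 2 -> task 0, container 3 -> task 1, ... */
    static int taskIdFromContainerId(int containerId) {
        return containerId - 2;
    }

    /** Task 0 is the designated master responsible for the zookeeperServerList. */
    static boolean isZooKeeperMaster(int containerId) {
        return taskIdFromContainerId(containerId) == 0;
    }

    public static void main(String[] args) {
        // Happy path: ids start at 2, so task 0 exists.
        System.out.println(isZooKeeperMaster(2));   // prints "true"
        // Failure mode from the thread: suppose only ids 3 and 5 were
        // granted. No container maps to task 0, so ZooKeeper never starts.
        System.out.println(isZooKeeperMaster(3));   // prints "false"
        System.out.println(isZooKeeperMaster(5));   // prints "false"
    }
}
```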
>>>
>>>  Thanks,
>>>  Tripti
>>>    From: Tripti Singh <tripti@yahoo-inc.com>
>>> Reply-To: "user@giraph.apache.org" <user@giraph.apache.org>
>>> Date: Sunday, November 23, 2014 at 12:23 PM
>>>
>>> To: "user@giraph.apache.org" <user@giraph.apache.org>
>>> Subject: Re: Help with Giraph on Yarn
>>>
>>>   Hi Das,
>>> Could you try what Alessandro is suggesting? The giraph-examples jar has
>>> all the classes required.
>>> Also, you have shared only the AM logs. Can you also add the worker logs?
>>> From a quick look at the AM log, I’m not sure whether this is the same
>>> problem that I faced.
>>>
>>>  Thanks,
>>>  Tripti
>>>    From: Alessandro Negro <alenegro81@yahoo.it>
>>> Reply-To: "user@giraph.apache.org" <user@giraph.apache.org>
>>> Date: Sunday, November 23, 2014 at 5:10 AM
>>> To: "user@giraph.apache.org" <user@giraph.apache.org>
>>> Subject: Re: Help with Giraph on Yarn
>>>
>>>   Hi Das,
>>> what about the Hadoop logs? Have a look there and let me know.
>>> In the meantime, you can remove the redundant jars from your YARN jars
>>> (the examples jar contains all the jars needed):
>>>
>>>   yarnjars=giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>>
>>>
>>>  That should be enough.
>>>
>>>  Alessandro.
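Alessandro's suggestion can be sketched in shell. This is a hedged sketch, not a tested command: the `echo` stands in for the real `bin/yarn` call, and the elided GiraphRunner arguments (`...`) mirror the run script quoted below in the thread.

```shell
# Sketch: collapse the -yj list to the single "fat" examples jar, which
# already bundles giraph-core, so the separate giraph-core entries from
# the longer comma-separated list are redundant.
jar=giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
yarnjars=$jar   # a single entry instead of a comma-separated list

# The echo stands in for the real invocation; "..." marks the elided
# GiraphRunner arguments (computation, input/output formats, paths, -w).
echo "bin/yarn jar \$GIRAPH_HOME/$jar org.apache.giraph.GiraphRunner ... -yj $yarnjars"
```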
>>>
>>>   On 23 Nov 2014, at 00:16, D Adams <dadamszx@gmail.com>
>>> wrote:
>>>
>>>    Hi Alessandro,
>>>       Do you run yarn that way because your Hadoop or Giraph file
>>> permissions are chowned by a user named yarn?
>>>
>>>  I have made changes as suggested on
>>> http://mail-archives.apache.org/mod_mbox/giraph-user/201408.mbox/%3C53F4C689.5060101@web.de%3E
>>>
>>>  And although things do look more hopeful, I still get an infinite loop.
>>> I run with a script which includes:
>>>
>>> user_dir=/user/hduser
>>> jar=giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>> runner=org.apache.giraph.GiraphRunner
>>> computation=org.apache.giraph.examples.SimpleShortestPathsComputation
>>>
>>> informat=org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
>>> outformat=org.apache.giraph.io.formats.IdWithValueTextOutputFormat
>>>
>>> yarnjars=giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar,giraph-core-1.1.0-SNAPSHOT.jar,giraph-1.1.0-hadoop-2.5.1.jar
>>>
>>> bin/hdfs dfs -rm -r $user_dir/output/shortestpaths
>>> bin/yarn jar $GIRAPH_HOME/$jar $runner -Dgiraph.yarn.task.heap.mb=3000
>>> $computation -vif $informat -vip $user_dir/input/tiny_graph.txt -vof
>>> $outformat -op $user_dir/output/shortestpaths -w 1 -yj $yarnjars
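For readers following along: the variables in the script above expand to a single submit command. A minimal sketch that just reassembles and prints that command (every name is copied from the script; nothing new is assumed, and nothing is actually submitted):

```shell
#!/bin/sh
# Reassemble the Giraph submit command from the script variables above.
# Echoed only; actually running it requires the Hadoop/Giraph setup
# described in this thread.
user_dir=/user/hduser
jar=giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
runner=org.apache.giraph.GiraphRunner
computation=org.apache.giraph.examples.SimpleShortestPathsComputation
informat=org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
outformat=org.apache.giraph.io.formats.IdWithValueTextOutputFormat
yarnjars=$jar,giraph-core-1.1.0-SNAPSHOT.jar,giraph-1.1.0-hadoop-2.5.1.jar

# $GIRAPH_HOME is left unexpanded, as in the original script.
cmd="bin/yarn jar \$GIRAPH_HOME/$jar $runner -Dgiraph.yarn.task.heap.mb=3000 \
$computation -vif $informat -vip $user_dir/input/tiny_graph.txt \
-vof $outformat -op $user_dir/output/shortestpaths -w 1 -yj $yarnjars"
echo "$cmd"
```

The `-yj` list is what the AM localizes for the workers, which is why the later "LIB JARS" log line echoes exactly these three jar names.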
>>>
>>>  And the resulting stdout:
>>> hduser@Roosevelt:/usr/local/hadoop$ ./hduser_jobs.sh
>>> Running hduser script ...
>>> 14/11/18 19:05:47 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes where
>>> applicable
>>> rm: `/user/hduser/output/shortestpaths': No such file or directory
>>> 14/11/18 19:06:13 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes where
>>> applicable
>>> 14/11/18 19:06:15 INFO utils.ConfigurationUtils: No edge input format
>>> specified. Ensure your InputFormat does not require one.
>>> 14/11/18 19:06:15 INFO utils.ConfigurationUtils: No edge output format
>>> specified. Ensure your OutputFormat does not require one.
>>> 14/11/18 19:06:16 INFO yarn.GiraphYarnClient: Final output path is:
>>> hdfs://localhost:9000/user/hduser/output/shortestpaths
>>> 14/11/18 19:06:16 INFO yarn.GiraphYarnClient: Running Client
>>> 14/11/18 19:06:16 INFO client.RMProxy: Connecting to ResourceManager at
>>> localhost/127.0.0.1:8050
>>> 14/11/18 19:06:17 INFO yarn.GiraphYarnClient: Got node report from ASM
>>> for, nodeId=roosevelt:58195, nodeAddress roosevelt:8042, nodeRackName
>>> /default-rack, nodeNumContainers 0
>>> 14/11/18 19:06:17 INFO yarn.GiraphYarnClient: Obtained new Application
>>> ID: application_1416359131664_0001
>>> 14/11/18 19:06:17 INFO Configuration.deprecation: mapred.job.id is
>>> deprecated. Instead, use mapreduce.job.id
>>> 14/11/18 19:06:17 INFO yarn.GiraphYarnClient: Set the environment for
>>> the application master
>>> 14/11/18 19:06:17 INFO yarn.GiraphYarnClient: Environment for AM
>>> :{CLASSPATH=${CLASSPATH}:./*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*}
>>> 14/11/18 19:06:17 INFO yarn.GiraphYarnClient: buildLocalResourceMap ....
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: Registered file in LocalResources
>>> ::
>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0001/giraph-conf.xml
>>> 14/11/18 19:06:19 INFO yarn.GiraphYarnClient: LIB JARS
>>> :giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar,giraph-core-1.1.0-SNAPSHOT.jar,giraph-1.1.0-hadoop-2.5.1.jar
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: Class path name .
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: base path checking .
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: Class path name /usr/local/hadoop
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: base path checking
>>> /usr/local/hadoop
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: Class path name null
>>> 14/11/18 19:06:19 INFO yarn.YarnUtils: base path checking null
>>> 14/11/18 19:06:19 INFO yarn.GiraphYarnClient: Made local resource for
>>> :/usr/local/hadoop/share/myLib/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>> to
>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0001/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>> 14/11/18 19:06:23 INFO yarn.YarnUtils: Registered file in LocalResources
>>> ::
>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0001/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>> 14/11/18 19:06:23 INFO yarn.GiraphYarnClient:
>>> ApplicationSumbissionContext for GiraphApplicationMaster launch container
>>> is populated.
>>> 14/11/18 19:06:23 INFO yarn.GiraphYarnClient: Submitting application to
>>> ASM
>>> 14/11/18 19:06:23 INFO impl.YarnClientImpl: Submitted application
>>> application_1416359131664_0001
>>> 14/11/18 19:06:23 INFO yarn.GiraphYarnClient: Got new appId after
>>> submission :application_1416359131664_0001
>>> 14/11/18 19:06:23 INFO yarn.GiraphYarnClient: GiraphApplicationMaster
>>> container request was submitted to ResourceManager for job: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation
>>> 14/11/18 19:06:24 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 0.90
>>> secs
>>> 14/11/18 19:06:24 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: ACCEPTED, Containers used: 0
>>> 14/11/18 19:06:28 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 5.05
>>> secs
>>> 14/11/18 19:06:28 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: ACCEPTED, Containers used: 1
>>> 14/11/18 19:06:33 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 9.89
>>> secs
>>> 14/11/18 19:06:33 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: ACCEPTED, Containers used: 1
>>> 14/11/18 19:06:37 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 14.37
>>> secs
>>> 14/11/18 19:06:37 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: ACCEPTED, Containers used: 1
>>> 14/11/18 19:06:42 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 18.81
>>> secs
>>> 14/11/18 19:06:42 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: ACCEPTED, Containers used: 1
>>> 14/11/18 19:06:46 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 22.82
>>> secs
>>> 14/11/18 19:06:46 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:06:50 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 26.83
>>> secs
>>> 14/11/18 19:06:50 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:06:54 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 30.83
>>> secs
>>> 14/11/18 19:06:54 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:06:58 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 34.85
>>> secs
>>> 14/11/18 19:06:58 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:07:02 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 38.85
>>> secs
>>> 14/11/18 19:07:02 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:07:06 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 42.86
>>> secs
>>> 14/11/18 19:07:06 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:07:10 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 46.88
>>> secs
>>> 14/11/18 19:07:10 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:07:14 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 50.89
>>> secs
>>> 14/11/18 19:07:14 INFO yarn.GiraphYarnClient:
>>> appattempt_1416359131664_0001_000001, State: RUNNING, Containers used: 2
>>> 14/11/18 19:07:18 INFO yarn.GiraphYarnClient: Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 54.90
>>> secs
>>>
>>>  Once the application reaches the RUNNING state, it enters an endless
>>> loop. It seems to be caused by the following:
>>>
>>>
>>>  2014-11-18 19:06:37,382 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:main(421)) - Starting GitaphAM
>>> 2014-11-18 19:06:39,214 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>> 2014-11-18 19:06:41,438 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:<init>(168)) - GiraphAM  for ContainerId container_1416359131664_0001_01_000001 ApplicationAttemptId appattempt_1416359131664_0001_000001
>>> 2014-11-18 19:06:41,497 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at localhost/127.0.0.1:8030
>>> 2014-11-18 19:06:41,581 INFO  [main] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:serviceInit(107)) - Upper bound of the thread pool size is 500
>>> 2014-11-18 19:06:41,583 INFO  [main] impl.ContainerManagementProtocolProxy (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
>>> 2014-11-18 19:06:42,355 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:setupContainerAskForRM(279)) - Requested container ask: Capability[<memory:3000, vCores:0>]Priority[10]
>>> 2014-11-18 19:06:42,370 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:setupContainerAskForRM(279)) - Requested container ask: Capability[<memory:3000, vCores:0>]Priority[10]
>>> 2014-11-18 19:06:42,618 INFO  [main] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:run(185)) - Wait to finish ..
>>> 2014-11-18 19:06:44,673 INFO  [AMRM Heartbeater thread] impl.AMRMClientImpl (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : roosevelt:58195
>>> 2014-11-18 19:06:44,677 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(605)) - Got response from RM for container ask, allocatedCnt=1
>>> 2014-11-18 19:06:44,677 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(608)) - Total allocated # of container so far : 1 allocated out of 2 required.
>>> 2014-11-18 19:06:44,678 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:startContainerLaunchingThreads(359)) - Launching command on a new container., containerId=container_1416359131664_0001_01_000002, containerNode=roosevelt:58195, containerNodeURI=roosevelt:8042, containerResourceMemory=3072
>>> 2014-11-18 19:06:44,679 INFO  [pool-4-thread-1] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(492)) - Setting up container launch container for containerid=container_1416359131664_0001_01_000002
>>> 2014-11-18 19:06:44,694 INFO  [pool-4-thread-1] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(498)) - Conatain launch Commands :java -Xmx3000M -Xms3000M -cp .:${CLASSPATH} org.apache.giraph.yarn.GiraphYarnTask 1416359131664 1 2 1 1><LOG_DIR>/task-2-stdout.log 2><LOG_DIR>/task-2-stderr.log
>>> 2014-11-18 19:06:44,694 INFO  [pool-4-thread-1] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(518)) - Setting username in ContainerLaunchContext to: hduser
>>> 2014-11-18 19:06:45,191 INFO  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFsResourcesToMap(72)) - Adding giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar to LocalResources for export.to hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0001/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>> 2014-11-18 19:06:45,228 INFO  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFileToResourceMap(160)) - Registered file in LocalResources :: hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0001/giraph-examples-1.1.0-hadoop-2.5.1-jar-with-dependencies.jar
>>> 2014-11-18 19:06:45,228 INFO  [pool-4-thread-1] yarn.YarnUtils (YarnUtils.java:addFsResourcesToMap(72)) - Adding giraph-core-1.1.0-SNAPSHOT.jar to LocalResources for export.to hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416359131664_0001/giraph-core-1.1.0-SNAPSHOT.jar
>>> 2014-11-18 19:06:45,683 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(605)) - Got response from RM for container ask, allocatedCnt=1
>>> 2014-11-18 19:06:45,683 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersAllocated(608)) - Total allocated # of container so far : 2 allocated out of 2 required.
>>> 2014-11-18 19:06:45,683 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:startContainerLaunchingThreads(359)) - Launching command on a new container., containerId=container_1416359131664_0001_01_000003, containerNode=roosevelt:58195, containerNodeURI=roosevelt:8042, containerResourceMemory=3072
>>> 2014-11-18 19:06:45,684 INFO  [pool-4-thread-3] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(492)) - Setting up container launch container for containerid=container_1416359131664_0001_01_000003
>>> 2014-11-18 19:06:45,684 INFO  [pool-4-thread-3] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(498)) - Conatain launch Commands :java -Xmx3000M -Xms3000M -cp .:${CLASSPATH} org.apache.giraph.yarn.GiraphYarnTask 1416359131664 1 3 1 1><LOG_DIR>/task-3-stdout.log 2><LOG_DIR>/task-3-stderr.log
>>> 2014-11-18 19:06:45,684 INFO  [pool-4-thread-3] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:buildContainerLaunchContext(518)) - Setting username in ContainerLaunchContext to: hduser
>>> 2014-11-18 19:06:45,697 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing Event EventType: START_CONTAINER for Container container_1416359131664_0001_01_000003
>>> 2014-11-18 19:06:45,699 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] impl.ContainerManagementProtocolProxy (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : roosevelt:58195
>>> 2014-11-18 19:06:45,761 INFO  [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing Event EventType: QUERY_CONTAINER for Container container_1416359131664_0001_01_000003
>>> 2014-11-18 19:06:46,687 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(571)) - Got response from RM for container ask, completedCnt=1
>>> 2014-11-18 19:06:46,688 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(574)) - Got container status for containerID=container_1416359131664_0001_01_000003, state=COMPLETE, exitStatus=1, diagnostics=Exception from container-launch: ExitCodeException exitCode=1:
>>> ExitCodeException exitCode=1:
>>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
>>> 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
>>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>>> 	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> 	at java.lang.Thread.run(Thread.java:745)
>>>
>>>
>>> Container exited with a non-zero exit code 1
>>>
>>> 2014-11-18 19:06:46,688 INFO  [AMRM Callback Handler Thread] yarn.GiraphApplicationMaster (GiraphApplicationMaster.java:onContainersCompleted(596)) - After completion of one conatiner. current status is: completedCount :1 containersToLaunch :2 successfulCount :0 failedCount :1
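The exit-1 container's own stderr is what Tripti asks for below. A hypothetical sketch of how to fetch it with the stock `yarn logs` CLI, assuming log aggregation is enabled (`yarn.log-aggregation-enable=true` in yarn-site.xml); ids and node address are copied from the AM log above, and the command is only printed here, not run:

```shell
#!/bin/sh
# Compose the log-retrieval command for the failed container seen above.
# Printed rather than executed; running it needs the cluster from this thread.
app_id=application_1416359131664_0001
container_id=container_1416359131664_0001_01_000003  # the state=COMPLETE, exitStatus=1 container
node=roosevelt:58195                                 # NM address from the AM log

cmd="bin/yarn logs -applicationId $app_id -containerId $container_id -nodeAddress $node"
echo "$cmd"
```

Without aggregation, the same task-3-stderr.log sits under the NodeManager's local log dir for that container.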
>>>
>>>
>>> On Sat, Nov 22, 2014 at 11:42 AM, D Adams <dadamszx@gmail.com> wrote:
>>>
>>>>  I'm sorry, please excuse my ignorance, but I have no idea where to
>>>> find my configuration. That is, I have no idea where these lines came from
>>>> (or from what file):
>>>>
>>>> yarn             1029   0,1  1,2  3499572 209560 s001  S    10:29am
>>>> 0:06.04 /Library/Java/
>>>> JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/bin/java
>>>> -Dproc_resourcemanager -Xmx500m
>>>> -Dhadoop.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>> -Dyarn.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>> ...
>>>>  As far as configuration goes, I'm only really aware of the
>>>> etc/hadoop/*-env.sh and .xml files and the main pom.xml file. Or is the
>>>> above part of the command you use to run giraph jobs?
>>>>  I will make the changes to the hadoop-env.sh file as you suggest and
>>>> let you know how it works out.
>>>>
>>>> On Fri, Nov 21, 2014 at 6:05 AM, Alessandro Negro <alenegro81@yahoo.it>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>> I forgot to say that I also added this line:
>>>>>
>>>>>  export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$MYLIB
>>>>>
>>>>>
>>>>>  Il giorno 21/nov/2014, alle ore 10:34, Alessandro Negro <
>>>>> alenegro81@yahoo.it> ha scritto:
>>>>>
>>>>>  Hi Das,
>>>>> this is my configuration of hadoop:
>>>>>
>>>>>  yarn             1029   0,1  1,2  3499572 209560 s001  S    10:29am
>>>>>   0:06.04
>>>>> /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/bin/java
>>>>> -Dproc_resourcemanager -Xmx500m
>>>>> -Dhadoop.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dyarn.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dhadoop.log.file=yarn-yarn-resourcemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.log.file=yarn-yarn-resourcemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,RFA
>>>>> -Dyarn.root.logger=INFO,RFA
>>>>> -Djava.library.path=/opt/yarn/hadoop-2.5.1/lib/native
>>>>> -Dyarn.policy.file=hadoop-policy.xml
>>>>> -Dhadoop.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dyarn.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dhadoop.log.file=yarn-yarn-resourcemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.log.file=yarn-yarn-resourcemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.home.dir=/opt/yarn/hadoop-2.5.1
>>>>> -Dhadoop.home.dir=/opt/yarn/hadoop-2.5.1 -Dhadoop.root.logger=INFO,RFA
>>>>> -Dyarn.root.logger=INFO,RFA
>>>>> -Djava.library.path=/opt/yarn/hadoop-2.5.1/lib/native -classpath
>>>>> /opt/yarn/hadoop-2.5.1/etc/hadoop:/opt/yarn/hadoop-2.5.1/etc/hadoop:/opt/yarn/hadoop-2.5.1/etc/hadoop:/opt/yarn/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/common/*:/opt/yarn/hadoop-2.5.1/share/hadoop/hdfs:/opt/yarn/hadoop-2.5.1/share/hadoop/hdfs/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/hdfs/*:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/*:/opt/yarn/hadoop-2.5.1/share/hadoop/mapreduce/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/mapreduce/*::/opt/yarn/hadoop-2.5.1/share/myLib/*.jar:/Users/ale/extprj/giraph/myGiraph/*.jar:/Users/ale/extprj/giraph/myGiraph/lib/*.jar:/contrib/capacity-scheduler/*.jar:/opt/yarn/hadoop-2.5.1/share/myLib/*.jar:/Users/ale/extprj/giraph/myGiraph/*.jar:/Users/ale/extprj/giraph/myGiraph/lib/*.jar:/contrib/capacity-scheduler/*.jar:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/*:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/lib/*:/opt/yarn/hadoop-2.5.1/etc/hadoop/rm-config/log4j.properties
>>>>> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
>>>>> yarn             1073   0,1  1,0  3347132 174644 s001  S    10:29am
>>>>> 0:04.77
>>>>> /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/bin/java
>>>>> -Dproc_nodemanager -Xmx500m -Dhadoop.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dyarn.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dhadoop.log.file=yarn-yarn-nodemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.log.file=yarn-yarn-nodemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,RFA
>>>>> -Dyarn.root.logger=INFO,RFA
>>>>> -Djava.library.path=/opt/yarn/hadoop-2.5.1/lib/native
>>>>> -Dyarn.policy.file=hadoop-policy.xml -server
>>>>> -Dhadoop.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dyarn.log.dir=/opt/yarn/hadoop-2.5.1/logs
>>>>> -Dhadoop.log.file=yarn-yarn-nodemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.log.file=yarn-yarn-nodemanager-MacBook-Pro-di-Alessandro.local.log
>>>>> -Dyarn.home.dir=/opt/yarn/hadoop-2.5.1
>>>>> -Dhadoop.home.dir=/opt/yarn/hadoop-2.5.1 -Dhadoop.root.logger=INFO,RFA
>>>>> -Dyarn.root.logger=INFO,RFA
>>>>> -Djava.library.path=/opt/yarn/hadoop-2.5.1/lib/native -classpath
>>>>> /opt/yarn/hadoop-2.5.1/etc/hadoop:/opt/yarn/hadoop-2.5.1/etc/hadoop:/opt/yarn/hadoop-2.5.1/etc/hadoop:/opt/yarn/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/common/*:/opt/yarn/hadoop-2.5.1/share/hadoop/hdfs:/opt/yarn/hadoop-2.5.1/share/hadoop/hdfs/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/hdfs/*:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/*:/opt/yarn/hadoop-2.5.1/share/hadoop/mapreduce/lib/*:/opt/yarn/hadoop-2.5.1/share/hadoop/mapreduce/*::/opt/yarn/hadoop-2.5.1/share/myLib/*.jar:/Users/ale/extprj/giraph/myGiraph/*.jar:/Users/ale/extprj/giraph/myGiraph/lib/*.jar:/contrib/capacity-scheduler/*.jar:/opt/yarn/hadoop-2.5.1/share/myLib/*.jar:/Users/ale/extprj/giraph/myGiraph/*.jar:/Users/ale/extprj/giraph/myGiraph/lib/*.jar:/contrib/capacity-scheduler/*.jar:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/*:/opt/yarn/hadoop-2.5.1/share/hadoop/yarn/lib/*:/opt/yarn/hadoop-2.5.1/etc/hadoop/nm-config/log4j.properties
>>>>> org.apache.hadoop.yarn.server.nodemanager.NodeManager
>>>>>
>>>>>  So I run it as the yarn user, like this:
>>>>>
>>>>>  sudo -u yarn ….
>>>>>
>>>>>  Moreover, I have just noticed that I added the following line to
>>>>> hadoop-env.sh:
>>>>>
>>>>>  export
>>>>> MYLIB=/opt/yarn/hadoop-2.5.1/share/myLib/*.jar:/Users/ale/extprj/giraph/myGiraph/*.jar:/Users/ale/extprj/giraph/myGiraph/lib/*.jar
>>>>>
>>>>>  According to this other thread:
>>>>>
>>>>>
>>>>> http://mail-archives.apache.org/mod_mbox/giraph-user/201408.mbox/%3C53F4C689.5060101@web.de%3E
>>>>>
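Taken together, the two hadoop-env.sh additions mentioned in this and the previous message would look roughly like this (a sketch using Alessandro's paths; adjust them to your own install):

```shell
# hadoop-env.sh fragment (Alessandro's paths; adapt to your layout).
# MYLIB lists the extra Giraph jar locations; appending it to
# HADOOP_CLASSPATH makes them visible to processes started via the
# hadoop/yarn scripts that source this file.
export MYLIB=/opt/yarn/hadoop-2.5.1/share/myLib/*.jar:/Users/ale/extprj/giraph/myGiraph/*.jar:/Users/ale/extprj/giraph/myGiraph/lib/*.jar
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$MYLIB
```

The `*.jar` globs are safe inside the assignment (the shell does not expand globs there); they are carried through literally, as the ResourceManager classpath dump above shows.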
>>>>>  Let me know,
>>>>> Alessandro
>>>>>
>>>>>
>>>>>  Il giorno 20/nov/2014, alle ore 23:51, D Adams <dadamszx@gmail.com>
>>>>> ha scritto:
>>>>>
>>>>> Ok, in the following, I use bin/yarn. Doesn't seem like much has
>>>>> changed:
>>>>>
>>>>> hduser@Roosevelt:/usr/local/hadoop$ ./hduser_jobs.sh
>>>>> Running hduser script ...
>>>>> bin/yarn jar
>>>>> /usr/local/giraph/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> org.apache.giraph.GiraphRunner
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation -vif
>>>>> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
>>>>> -vip /user/hduser/input/tiny_graph.txt -vof
>>>>> org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op
>>>>> /user/hduser/output/shortestpaths -w 1 -yj
>>>>> giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> 14/11/18 05:36:05 WARN util.NativeCodeLoader: Unable to load
>>>>> native-hadoop
>>>>> library for your platform... using builtin-java classes where
>>>>> applicable
>>>>> 14/11/18 05:36:08 INFO utils.ConfigurationUtils: No edge input format
>>>>> specified. Ensure your InputFormat does not require one.
>>>>> 14/11/18 05:36:08 INFO utils.ConfigurationUtils: No edge output format
>>>>> specified. Ensure your OutputFormat does not require one.
>>>>> 14/11/18 05:36:08 INFO yarn.GiraphYarnClient: Final output path is:
>>>>> hdfs://localhost:9000/user/hduser/output/shortestpaths
>>>>> 14/11/18 05:36:08 INFO yarn.GiraphYarnClient: Running Client
>>>>> 14/11/18 05:36:08 INFO client.RMProxy: Connecting to ResourceManager at
>>>>> localhost/127.0.0.1:8050
>>>>> 14/11/18 05:36:09 INFO yarn.GiraphYarnClient: Got node report from ASM
>>>>> for,
>>>>> nodeId=roosevelt:60486, nodeAddress roosevelt:8042, nodeRackName
>>>>> /default-rack, nodeNumContainers 0
>>>>> 14/11/18 05:36:09 INFO yarn.GiraphYarnClient: Defaulting per-task heap
>>>>> size
>>>>> to 1024MB.
>>>>> 14/11/18 05:36:09 INFO yarn.GiraphYarnClient: Obtained new Application
>>>>> ID:
>>>>> application_1416310496070_0001
>>>>> 14/11/18 05:36:09 INFO Configuration.deprecation: mapred.job.id is
>>>>> deprecated. Instead, use mapreduce.job.id
>>>>> 14/11/18 05:36:09 INFO yarn.GiraphYarnClient: Set the environment for
>>>>> the
>>>>> application master
>>>>> 14/11/18 05:36:09 INFO yarn.GiraphYarnClient: Environment for AM
>>>>> :{CLASSPATH=${CLASSPATH}:./*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*}
>>>>> 14/11/18 05:36:09 INFO yarn.GiraphYarnClient: buildLocalResourceMap
>>>>> ....
>>>>> 14/11/18 05:36:12 INFO yarn.YarnUtils: Registered file in
>>>>> LocalResources ::
>>>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416310496070_0001/giraph-conf.xml
>>>>> 14/11/18 05:36:12 INFO yarn.GiraphYarnClient: LIB JARS
>>>>>
>>>>> :giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> 14/11/18 05:36:12 INFO yarn.YarnUtils: Class path name .
>>>>> 14/11/18 05:36:12 INFO yarn.YarnUtils: base path checking .
>>>>> 14/11/18 05:36:12 INFO yarn.GiraphYarnClient: Made local resource for
>>>>> :/usr/local/hadoop/share/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> to
>>>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416310496070_0001/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> 14/11/18 05:36:16 INFO yarn.YarnUtils: Registered file in
>>>>> LocalResources ::
>>>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416310496070_0001/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> 14/11/18 05:36:16 INFO yarn.GiraphYarnClient:
>>>>> ApplicationSumbissionContext
>>>>> for GiraphApplicationMaster launch container is populated.
>>>>> 14/11/18 05:36:16 INFO yarn.GiraphYarnClient: Submitting application
>>>>> to ASM
>>>>> 14/11/18 05:36:16 INFO impl.YarnClientImpl: Submitted application
>>>>> application_1416310496070_0001
>>>>> 14/11/18 05:36:16 INFO yarn.GiraphYarnClient: Got new appId after
>>>>> submission :application_1416310496070_0001
>>>>> 14/11/18 05:36:16 INFO yarn.GiraphYarnClient: GiraphApplicationMaster
>>>>> container request was submitted to ResourceManager for job: Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation
>>>>> 14/11/18 05:36:17 INFO yarn.GiraphYarnClient: Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed:
>>>>> 1.01
>>>>> secs
>>>>> 14/11/18 05:36:17 INFO yarn.GiraphYarnClient:
>>>>> appattempt_1416310496070_0001_000001, State: ACCEPTED, Containers
>>>>> used: 1
>>>>> 14/11/18 05:36:22 INFO yarn.GiraphYarnClient: Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed:
>>>>> 5.86
>>>>> secs
>>>>> 14/11/18 05:36:22 INFO yarn.GiraphYarnClient:
>>>>> appattempt_1416310496070_0001_000001, State: ACCEPTED, Containers
>>>>> used: 1
>>>>> 14/11/18 05:36:26 INFO yarn.GiraphYarnClient: Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed:
>>>>> 10.02
>>>>> secs
>>>>> 14/11/18 05:36:26 INFO yarn.GiraphYarnClient:
>>>>> appattempt_1416310496070_0001_000001, State: ACCEPTED, Containers
>>>>> used: 1
>>>>> 14/11/18 05:36:30 INFO yarn.GiraphYarnClient: Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed:
>>>>> 14.39
>>>>> secs
>>>>> 14/11/18 05:36:30 INFO yarn.GiraphYarnClient:
>>>>> appattempt_1416310496070_0001_000001, State: ACCEPTED, Containers
>>>>> used: 1
>>>>> 14/11/18 05:36:34 INFO yarn.GiraphYarnClient: Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed:
>>>>> 18.57
>>>>> secs
>>>>> 14/11/18 05:36:34 INFO yarn.GiraphYarnClient:
>>>>> appattempt_1416310496070_0001_000001, State: RUNNING, Containers used:
>>>>> 2
>>>>>
>>>>> gam_stderr.log
>>>>>
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1416310496070_0001/filecache/10/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>>>>
>>>>> gam_stdout.log
>>>>>
>>>>> 2014-11-18 05:36:28,143 INFO  [main] yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:main(421)) - Starting GitaphAM
>>>>> 2014-11-18 05:36:30,404 WARN  [main] util.NativeCodeLoader
>>>>> (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop
>>>>> library for your platform... using builtin-java classes where
>>>>> applicable
>>>>> 2014-11-18 05:36:32,649 INFO  [main] yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:<init>(168)) - GiraphAM  for ContainerId
>>>>> container_1416310496070_0001_01_000001 ApplicationAttemptId
>>>>> appattempt_1416310496070_0001_000001
>>>>> 2014-11-18 05:36:32,852 INFO  [main] client.RMProxy
>>>>> (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at
>>>>> localhost/127.0.0.1:8030
>>>>> 2014-11-18 05:36:32,888 INFO  [main] impl.NMClientAsyncImpl
>>>>> (NMClientAsyncImpl.java:serviceInit(107)) - Upper bound of the thread
>>>>> pool size is 500
>>>>> 2014-11-18 05:36:32,889 INFO  [main]
>>>>> impl.ContainerManagementProtocolProxy
>>>>> (ContainerManagementProtocolProxy.java:<init>(78)) -
>>>>> yarn.client.max-nodemanagers-proxies : 500
>>>>> 2014-11-18 05:36:33,589 INFO  [main] yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:setupContainerAskForRM(279)) - Requested
>>>>> container ask: Capability[<memory:1024, vCores:0>]Priority[10]
>>>>> 2014-11-18 05:36:33,617 INFO  [main] yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:setupContainerAskForRM(279)) - Requested
>>>>> container ask: Capability[<memory:1024, vCores:0>]Priority[10]
>>>>> 2014-11-18 05:36:33,617 INFO  [main] yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:run(185)) - Wait to finish ..
>>>>> 2014-11-18 05:36:35,674 INFO  [AMRM Heartbeater thread]
>>>>> impl.AMRMClientImpl (AMRMClientImpl.java:populateNMTokens(299)) -
>>>>> Received new token for : roosevelt:60486
>>>>> 2014-11-18 05:36:35,681 INFO  [AMRM Callback Handler Thread]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:onContainersAllocated(605)) - Got
>>>>> response from RM for container ask, allocatedCnt=1
>>>>> 2014-11-18 05:36:35,681 INFO  [AMRM Callback Handler Thread]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:onContainersAllocated(608)) - Total
>>>>> allocated # of container so far : 1 allocated out of 2 required.
>>>>> 2014-11-18 05:36:35,681 INFO  [AMRM Callback Handler Thread]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:startContainerLaunchingThreads(359)) -
>>>>> Launching command on a new container.,
>>>>> containerId=container_1416310496070_0001_01_000002,
>>>>> containerNode=roosevelt:60486, containerNodeURI=roosevelt:8042,
>>>>> containerResourceMemory=1024
>>>>> 2014-11-18 05:36:35,688 INFO  [pool-4-thread-1]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:buildContainerLaunchContext(492)) -
>>>>> Setting up container launch container for
>>>>> containerid=container_1416310496070_0001_01_000002
>>>>> 2014-11-18 05:36:35,720 INFO  [pool-4-thread-1]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:buildContainerLaunchContext(498)) -
>>>>> Conatain launch Commands :java -Xmx1024M -Xms1024M -cp .:${CLASSPATH}
>>>>> org.apache.giraph.yarn.GiraphYarnTask 1416310496070 1 2 1
>>>>> 1><LOG_DIR>/task-2-stdout.log 2><LOG_DIR>/task-2-stderr.log
>>>>> 2014-11-18 05:36:35,721 INFO  [pool-4-thread-1]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:buildContainerLaunchContext(518)) -
>>>>> Setting username in ContainerLaunchContext to: hduser
>>>>> 2014-11-18 05:36:36,789 INFO  [AMRM Callback Handler Thread]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:onContainersAllocated(605)) - Got
>>>>> response from RM for container ask, allocatedCnt=1
>>>>> 2014-11-18 05:36:36,789 INFO  [AMRM Callback Handler Thread]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:onContainersAllocated(608)) - Total
>>>>> allocated # of container so far : 2 allocated out of 2 required.
>>>>> 2014-11-18 05:36:36,790 INFO  [AMRM Callback Handler Thread]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:startContainerLaunchingThreads(359)) -
>>>>> Launching command on a new container.,
>>>>> containerId=container_1416310496070_0001_01_000003,
>>>>> containerNode=roosevelt:60486, containerNodeURI=roosevelt:8042,
>>>>> containerResourceMemory=1024
>>>>> 2014-11-18 05:36:37,227 INFO  [pool-4-thread-2]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:buildContainerLaunchContext(492)) -
>>>>> Setting up container launch container for
>>>>> containerid=container_1416310496070_0001_01_000003
>>>>> 2014-11-18 05:36:37,227 INFO  [pool-4-thread-2]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:buildContainerLaunchContext(498)) -
>>>>> Conatain launch Commands :java -Xmx1024M -Xms1024M -cp .:${CLASSPATH}
>>>>> org.apache.giraph.yarn.GiraphYarnTask 1416310496070 1 3 1
>>>>> 1><LOG_DIR>/task-3-stdout.log 2><LOG_DIR>/task-3-stderr.log
>>>>> 2014-11-18 05:36:37,227 INFO  [pool-4-thread-2]
>>>>> yarn.GiraphApplicationMaster
>>>>> (GiraphApplicationMaster.java:buildContainerLaunchContext(518)) -
>>>>> Setting username in ContainerLaunchContext to: hduser
>>>>> 2014-11-18 05:36:37,463 INFO  [pool-4-thread-1] yarn.YarnUtils
>>>>> (YarnUtils.java:addFsResourcesToMap(72)) - Adding
>>>>> giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> to LocalResources for export.to
>>>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416310496070_0001/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> 2014-11-18 05:36:37,678 INFO  [pool-4-thread-1] yarn.YarnUtils
>>>>> (YarnUtils.java:addFileToResourceMap(160)) - Registered file in
>>>>> LocalResources ::
>>>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416310496070_0001/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> 2014-11-18 05:36:37,678 WARN  [pool-4-thread-1] yarn.YarnUtils
>>>>> (YarnUtils.java:addFsResourcesToMap(81)) - Job jars (-yj option)
>>>>> didn't include giraph-core.
>>>>> 2014-11-18 05:36:37,680 INFO  [pool-4-thread-1] yarn.YarnUtils
>>>>> (YarnUtils.java:addFileToResourceMap(160)) - Registered file in
>>>>> LocalResources ::
>>>>> hdfs://localhost:9000/user/hduser/giraph_yarn_jar_cache/application_1416310496070_0001/giraph-conf.xml
>>>>> 2014-11-18 05:36:37,694 INFO
>>>>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0]
>>>>> impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing
>>>>> Event EventType: START_CONTAINER for Container
>>>>> container_1416310496070_0001_01_000002
>>>>> 2014-11-18 05:36:37,699 INFO
>>>>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #1]
>>>>> impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing
>>>>> Event EventType: START_CONTAINER for Container
>>>>> container_1416310496070_0001_01_000003
>>>>> 2014-11-18 05:36:37,698 INFO
>>>>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0]
>>>>> impl.ContainerManagementProtocolProxy
>>>>> (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy
>>>>> : roosevelt:60486
>>>>> 2014-11-18 05:36:37,761 INFO
>>>>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #2]
>>>>> impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing
>>>>> Event EventType: QUERY_CONTAINER for Container
>>>>> container_1416310496070_0001_01_000003
>>>>> 2014-11-18 05:36:37,771 INFO
>>>>> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #3]
>>>>> impl.NMClientAsyncImpl (NMClientAsyncImpl.java:run(531)) - Processing
>>>>> Event EventType: QUERY_CONTAINER for Container
>>>>> container_1416310496070_0001_01_000002
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Nov 20, 2014 at 1:32 PM, Alessandro Negro <alenegro81@yahoo.it> wrote:
>>>>>
>>>>> Hi Das,
>>>>> I mean the user that you run the Hadoop yarn resource manager as.
>>>>>
>>>>> Alessandro
>>>>>
>>>>> On 20 Nov 2014, at 19:22, D Adams <dadamszx@gmail.com> wrote:
>>>>>
>>>>> Alessandro,
>>>>>     I'm not sure what you mean, should I create a new user on my system
>>>>> named 'yarn'? I'm new to both hadoop and giraph, so I'm not sure.
>>>>>
>>>>> Tripti,
>>>>>     I'll get those logs as soon as I can.
>>>>>
>>>>> Thank you both,
>>>>>
>>>>> V/r
>>>>> Das
>>>>> On Nov 20, 2014 3:13 AM, "Alessandro Negro" <alenegro81@yahoo.it>
>>>>> wrote:
>>>>>
>>>>> Hi Tripti,
>>>>> I agree that a more detailed error log could be useful.
>>>>>
>>>>> Thanks,
>>>>> Alessandro
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 19 Nov 2014, at 15:54, Tripti Singh <tripti@yahoo-inc.com> wrote:
>>>>>
>>>>>
>>>>> Hi Alessandro,
>>>>>
>>>>> I guess u r talking about the documentation on the Giraph webpage which
>>>>> mentions the addition of this new option, which is more or less mandatory
>>>>> for yarn-based profiles.
>>>>> When I first ran Giraph on yarn without the proper -yj option, there
>>>>> was no way I could figure out from the Application/Container logs that
>>>>> this was the issue.
>>>>> I think it'll be a good idea to have this message in the logs for easy
>>>>> debugging.
>>>>>
>>>>> Thanks,
>>>>> Tripti.
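
[Editor's note: a minimal sketch of the kind of Giraph-on-YARN invocation this thread is about, where the -yj option lists the job jars the ApplicationMaster must ship to task containers. The input/output paths and worker count are assumptions for illustration; the class and format names are standard Giraph examples.]

```shell
# Sketch: run SimpleShortestPathsComputation on YARN, passing the job jar
# explicitly via -yj so task containers can load the Giraph classes.
# Paths (-vip, -op) are hypothetical.
hadoop jar giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar \
  org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/hduser/input/tiny_graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/hduser/output/shortestpaths \
  -w 1 \
  -yj giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
```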
>>>>>
>>>>>
>>>>> From: Alessandro Negro <alenegro81@yahoo.it>
>>>>> Reply-To: "user@giraph.apache.org" <user@giraph.apache.org>
>>>>> Date: Tuesday, November 18, 2014 at 4:31 PM
>>>>> To: "user@giraph.apache.org" <user@giraph.apache.org>
>>>>> Subject: Re: Help with Giraph on Yarn
>>>>>
>>>>> Hi Eli,
>>>>> no, I think that the help message is clear enough. Generally, when I read
>>>>> "jar" I always assume an absolute path for the jar, but this is my
>>>>> personal misunderstanding.
>>>>>
>>>>> Thanks,
>>>>> Alessandro
>>>>>
>>>>>
>>>>> On 16 Nov 2014, at 21:39, Eli Reisman <apache.mailbox@gmail.com> wrote:
>>>>>
>>>>> Sounds like you got it figured out since last time I checked this list,
>>>>> sorry it was a pain. Feel free to drop a JIRA ticket if we can make the
>>>>> help message etc. for the -yj option more clear, there's lots to improve
>>>>> there.
>>>>>
>>>>> On Sat, Nov 8, 2014 at 7:26 AM, Alessandro Negro <alenegro81@yahoo.it> wrote:
>>>>>
>>>>> Hi Tripti,
>>>>> finally I was able to run the test with success. It was an issue of
>>>>> permission, since I was running as ale, not as yarn.
>>>>> Let me say that now I'm able to run Giraph examples on Yarn 2.5.1.
>>>>> This is the final result:
>>>>>
>>>>> 14/11/08 16:24:00 INFO yarn.GiraphYarnClient: Completed Giraph:
>>>>> org.apache.giraph.examples.SimpleShortestPathsComputation: SUCCEEDED,
>>>>> total running time: 0 minutes, 21 seconds.
>>>>>
>>>>> Many thanks for your support,
>>>>> Alessandro
>>>>>
>>>>> On 06 Nov 2014, at 15:16, Tripti Singh <tripti@yahoo-inc.com> wrote:
>>>>>
>>>>> I don't know if u have access to this node. But if u do, u could check
>>>>> if the file is indeed there and u have access to it.
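
[Editor's note: that check can be done with standard HDFS shell commands. The paths below are taken from the FileNotFoundException later in this thread; the submitting user name (ale) is an assumption from context.]

```shell
# Check whether the jar the ApplicationMaster tries to localize exists
# under the yarn user's staging dir (path copied from the exception):
hdfs dfs -ls hdfs://hadoop-master:9000/user/yarn/giraph_yarn_jar_cache/
# Compare with the staging dir of the user who submitted the job
# (assumed here to be 'ale'); a mismatch points at a permissions/user issue:
hdfs dfs -ls /user/ale/giraph_yarn_jar_cache/
```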
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On 06-Nov-2014, at 6:12 pm, "Alessandro Negro" <alenegro81@yahoo.it> wrote:
>>>>>
>>>>> You are right, it works, but now I receive the following error:
>>>>>
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/private/tmp/hadoop-yarn/nm-local-dir/usercache/ale/appcache/application_1415264041937_0009/filecache/10/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/opt/yarn/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>>>> 2014-11-06 13:15:37.120 java[10158:1803] Unable to load realm info from SCDynamicStore
>>>>> Exception in thread "pool-4-thread-1" java.lang.IllegalStateException:
>>>>> Could not configure the containerlaunch context for GiraphYarnTasks.
>>>>> at org.apache.giraph.yarn.GiraphApplicationMaster.getTaskResourceMap(GiraphApplicationMaster.java:391)
>>>>> at org.apache.giraph.yarn.GiraphApplicationMaster.access$500(GiraphApplicationMaster.java:78)
>>>>> at org.apache.giraph.yarn.GiraphApplicationMaster$LaunchContainerRunnable.buildContainerLaunchContext(GiraphApplicationMaster.java:522)
>>>>> at org.apache.giraph.yarn.GiraphApplicationMaster$LaunchContainerRunnable.run(GiraphApplicationMaster.java:479)
>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>> at java.lang.Thread.run(Thread.java:744)
>>>>> Caused by: java.io.FileNotFoundException: File does not exist:
>>>>> hdfs://hadoop-master:9000/user/yarn/giraph_yarn_jar_cache/application_1415264041937_0009/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
>>>>> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
>>>>> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>>>> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
>>>>> at org.apache.giraph.yarn.YarnUtils.addFileToResourceMap(YarnUtils.java:153)
>>>>> at org.apache.giraph.yarn.YarnUtils.addFsResourcesToMap(YarnUtils.java:77)
>>>>> at org.apache.giraph.yarn.GiraphApplicationMaster.getTaskResourceMap(GiraphApplicationMaster.java:387)
>>>>> ... 6 more
>>>>>
>>>>>
>>>>> That explains the other error I receive in the task:
>>>>> Could not find or load main class org.apache.giraph.yarn.GiraphYarnTask
>>>>>
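
[Editor's note: a quick way to confirm whether a given jar actually bundles the class a container failed to load. The jar name is the one discussed in this thread, assumed to be in the current directory.]

```shell
# List the jar's entries and search for the task class reported as missing;
# no output means the class is not packaged in this jar.
jar tf giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar \
  | grep org/apache/giraph/yarn/GiraphYarnTask
```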
>>>>>
>>>>> Thanks,
>>>>>
>>>>> On 06 Nov 2014, at 13:07, Tripti Singh <tripti@yahoo-inc.com> wrote:
>>>>>
>>>>> Why r u adding two jars? The example jar ideally contains the core
>>>>> library, so everything should be available with just one example jar
>>>>> included.
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On 06-Nov-2014, at 4:33 pm, "Alessandro Negro" <alenegro81@yahoo.it> wrote:
>>>>>
>>>>> Hi,
>>>>> now it seems better, I need to add:
>>>>>
>>>>> giraph-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar,giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar
>>>>>
>>>>> Now it seems that after a lot of cycles it fails with this error:
>>>>>
>>>>> Could not find or load main class org.apache.giraph.yarn.GiraphYarnTask
>>>>>
>>>>> But in this case the error appears in task-3-stderr.log, not in
>>>>> gam-stderr.log, where there is the following error:
>>>>>
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/private/tmp/hadoop-yarn/nm-local-dir/usercache/ale/appcache/application_1415264041937_0006/filecache/12/giraph-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/private/tmp/hadoop-yarn/nm-local-dir/usercache/ale/appcache/application_1415264041937_0006/filecache/10/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/opt/yarn/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.