hadoop-yarn-dev mailing list archives

From 牛兆捷 <nzjem...@gmail.com>
Subject Re: container memory resource request
Date Wed, 24 Apr 2013 05:16:40 GMT
I use Hadoop 2.0.3-alpha.
I have also attached the configuration files, mapred-site.xml and yarn-site.xml.


2013/4/24 Hitesh Shah <hitesh@hortonworks.com>

> As some folks have mentioned earlier, it is very likely that
> "yarn.scheduler.minimum-allocation-mb" is set to 2048 in yarn-site.xml.
>
> If this is set to something different, it might be helpful to provide what
> version of hadoop you are running as
> well as a copy of your yarn-site.xml from the node running the
> ResourceManager.
>
> -- Hitesh
>
> On Apr 23, 2013, at 8:52 PM, 牛兆捷 wrote:
>
> > Why is the memory of the map task 2048 rather than 900 (1024)?
> >
> >
> > 2013/4/24 牛兆捷 <nzjemail@gmail.com>
> >
> >>
> >> Map task container:
> >>
> >> 2013-04-24 01:14:06,398 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from NEW to ALLOCATED
> >> 2013-04-24 01:14:06,398 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hustnn OPERATION=AM Allocated Container        TARGET=SchedulerApp RESULT=SUCCESS  APPID=application_1366737158682_0002 CONTAINERID=container_1366737158682_0002_01_000002
> >> 2013-04-24 01:14:06,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1366737158682_0002_01_000002 of capacity <memory:2048, vCores:1> on host compute-0-0.local:44082, which currently has 2 containers, <memory:4096, vCores:2> used and <memory:20480, vCores:46> available
> >> 2013-04-24 01:14:06,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1366737158682_0002 container=Container: [ContainerId: container_1366737158682_0002_01_000002, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 20, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 2, }, state: C_NEW, ] containerId=container_1366737158682_0002_01_000002 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1 usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:1> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:06,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:4096, vCores:2>usedCapacity=0.16666667, absoluteUsedCapacity=0.16666667, numApps=1, numContainers=2
> >> 2013-04-24 01:14:06,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.16666667 absoluteUsedCapacity=0.16666667 used=<memory:4096, vCores:2> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:07,015 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
> >> 2013-04-24 01:14:07,405 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from ACQUIRED to RUNNING
> >> 2013-04-24 01:14:13,920 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from RUNNING to COMPLETED
> >>
> >> reduce task container:
> >> 2013-04-24 01:14:14,923 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from NEW to ALLOCATED
> >> 2013-04-24 01:14:14,923 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hustnn OPERATION=AM Allocated Container        TARGET=SchedulerApp RESULT=SUCCESS  APPID=application_1366737158682_0002 CONTAINERID=container_1366737158682_0002_01_000003
> >> 2013-04-24 01:14:14,923 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1366737158682_0002_01_000003 of capacity <memory:3072, vCores:1> on host compute-0-0.local:44082, which currently has 2 containers, <memory:5120, vCores:2> used and <memory:19456, vCores:46> available
> >> 2013-04-24 01:14:14,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1366737158682_0002 container=Container: [ContainerId: container_1366737158682_0002_01_000003, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:3072, vCores:1>, Priority: 10, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 3, }, state: C_NEW, ] containerId=container_1366737158682_0002_01_000003 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1 usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:1> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:14,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:5120, vCores:2>usedCapacity=0.20833333, absoluteUsedCapacity=0.20833333, numApps=1, numContainers=2
> >> 2013-04-24 01:14:14,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.20833333 absoluteUsedCapacity=0.20833333 used=<memory:5120, vCores:2> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:15,070 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
> >> 2013-04-24 01:14:15,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from ACQUIRED to RUNNING
> >> 2013-04-24 01:14:21,652 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from RUNNING to COMPLETED
> >>
> >> AM container:
> >>
> >> 2013-04-24 01:13:59,370 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000001 Container Transitioned from NEW to ALLOCATED
> >> 2013-04-24 01:13:59,370 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hustnn OPERATION=AM Allocated Container        TARGET=SchedulerApp RESULT=SUCCESS  APPID=application_1366737158682_0002 CONTAINERID=container_1366737158682_0002_01_000001
> >> 2013-04-24 01:13:59,370 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1366737158682_0002_01_000001 of capacity <memory:2048, vCores:1> on host compute-0-0.local:44082, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:22528, vCores:47> available
> >> 2013-04-24 01:13:59,374 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1366737158682_0002 container=Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] containerId=container_1366737158682_0002_01_000001 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:13:59,374 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1
> >> 2013-04-24 01:13:59,374 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:1> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:13:59,376 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
> >> 2013-04-24 01:13:59,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1366737158682_0002 AttemptId: appattempt_1366737158682_0002_000001 MasterContainer: Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ]
> >> 2013-04-24 01:13:59,379 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1366737158682_0002_000001 State change from SCHEDULED to ALLOCATED_SAVING
> >> 2013-04-24 01:13:59,381 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for attempt: appattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1366737158682_0002_000001 State change from ALLOCATED_SAVING to ALLOCATED
> >> 2013-04-24 01:13:59,389 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,414 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] for AM appattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,414 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1366737158682_0002_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=<LOG_DIR> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
> >> 2013-04-24 01:13:59,968 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] for AM appattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1366737158682_0002_000001 State change from ALLOCATED to LAUNCHED
> >> 2013-04-24 01:14:00,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000001 Container Transitioned from ACQUIRED to RUNNING
> >>
> >>
> >> 2013/4/24 Zhijie Shen <zshen@hortonworks.com>
> >>
> >>> Would you please look into the ResourceManager log and check how many containers are allocated and what the allocated memory is? You may want to search the log for "assignedContainer".
> >>>
> >>>
> >>> On Tue, Apr 23, 2013 at 10:19 AM, 牛兆捷 <nzjemail@gmail.com> wrote:
> >>>
> >>>> I configured them in mapred-site.xml as below; I set them to less than 1000 for the normalization, as you said:
> >>>>
> >>>> "
> >>>> <property>
> >>>>    <name>yarn.app.mapreduce.am.resource.mb</name>
> >>>>    <value>900</value>
> >>>>  </property>
> >>>>  <property>
> >>>>    <name>mapreduce.map.memory.mb</name>
> >>>>    <value>900</value>
> >>>>  </property>
> >>>>  <property>
> >>>>    <name>mapreduce.reduce.memory.mb</name>
> >>>>    <value>900</value>
> >>>>  </property>
> >>>> "
> >>>>
> >>>> Then I ran just one map. As you said, 2 containers will be launched: one for the AM (application master), the other for the map task.
> >>>> However, the 2 containers cost 4 GB of memory, which I can see from the YARN UI.
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> 2013/4/24 Zhijie Shen <zshen@hortonworks.com>
> >>>>
> >>>>> Do you mean the memory assigned for the container of M/R's AM? Did you set ContainerLaunchContext.setResource?
> >>>>>
> >>>>> AFAIK, by default, yarn.scheduler.minimum-allocation-mb = 1024 and yarn.app.mapreduce.am.resource.mb = 1536. So the M/R job will request 1536 for its AM, but YARN's scheduler will normalize the request to 2048, which is the smallest multiple of the minimum allocation that is no less than 1536.
> >>>>>
> >>>>>
> >>>>> On Tue, Apr 23, 2013 at 8:43 AM, 牛兆捷 <nzjemail@gmail.com> wrote:
> >>>>>
> >>>>>> I am using 2.0.3-alpha. I don't set the map memory capacity explicitly, so "resourceCapacity.setMemory" should set the default memory request to 1024 MB. However, 2048 MB of memory is assigned to this container.
> >>>>>>
> >>>>>> Why does it do this?
> >>>>>>
> >>>>>> --
> >>>>>> *Sincerely,*
> >>>>>> *Zhaojie*
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Zhijie Shen
> >>>>> Hortonworks Inc.
> >>>>> http://hortonworks.com/
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> *Sincerely,*
> >>>> *Zhaojie*
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Zhijie Shen
> >>> Hortonworks Inc.
> >>> http://hortonworks.com/
> >>>
> >>
> >>
> >>
> >> --
> >> *Sincerely,*
> >> *Zhaojie*
> >>
> >
> >
> >
> > --
> > *Sincerely,*
> > *Zhaojie*
>
>
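For reference, the request normalization Zhijie described above can be sketched roughly as follows. This is only an illustration of the rounding behavior, not the actual Hadoop API: the function name is made up, and the clamp to a maximum allocation (yarn.scheduler.maximum-allocation-mb, 8192 here as a placeholder) is my own addition.

```python
def normalize_memory(requested_mb, min_alloc_mb, max_alloc_mb):
    """Round a memory request up to the nearest multiple of the
    minimum allocation, then clamp it to the maximum allocation."""
    wanted = max(requested_mb, min_alloc_mb)
    multiples = -(-wanted // min_alloc_mb)  # ceiling division
    return min(multiples * min_alloc_mb, max_alloc_mb)

# With the stock defaults (min = 1024), the 1536 MB AM request
# is rounded up to the next multiple of 1024:
print(normalize_memory(1536, 1024, 8192))  # 2048

# With yarn.scheduler.minimum-allocation-mb = 2048, even a 900 MB
# map request is rounded up to 2048, matching the logs above:
print(normalize_memory(900, 2048, 8192))   # 2048
```

So if every container (AM, map) shows up as 2048 MB, that is consistent with a 2048 MB minimum allocation, which is why checking yarn-site.xml was suggested.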


-- 
*Sincerely,*
*Zhaojie*
