hadoop-hdfs-user mailing list archives

From YouPeng Yang <yypvsxf19870...@gmail.com>
Subject Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.
Date Fri, 06 Dec 2013 01:32:41 GMT
Hi

  Have you spread your config across all the nodes of your cluster?

  And have you checked whether the failing containers are concentrated on
any particular nodes?


regards


2013/12/5 panfei <cnweike@gmail.com>

> Hi YouPeng, thanks for your advice. I have read the docs and configured the
> parameters as follows:
>
> Physical Server: 8 cores CPU, 16GB memory.
>
> For YARN:
>
> yarn.nodemanager.resource.memory-mb set to 12 GB, keeping 4 GB for the OS.
>
> yarn.scheduler.minimum-allocation-mb set to 2048 MB as the minimum
> allocation unit for a container.
>
> yarn.nodemanager.vmem-pmem-ratio left at the default value of 2.1.
>
>
> FOR MAPREDUCE:
>
> mapreduce.map.memory.mb set to 2048 for map task containers.
>
> mapreduce.reduce.memory.mb set to 4096 for reduce task containers.
>
> mapreduce.map.java.opts set to -Xmx1536m
>
> mapreduce.reduce.java.opts set to -Xmx3072m
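
Spelled out, the MapReduce side of that configuration would look something
like this in mapred-site.xml (a fragment built from the values quoted above,
not the poster's actual file):

```xml
<!-- mapred-site.xml: container sizes and JVM heaps for MR tasks.
     The heap (-Xmx) is kept below the container size so that the JVM's
     non-heap memory (stack, code cache, direct buffers) does not push
     the task over the physical-memory limit. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3072m</value>
</property>
```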
>
>
>
> after setting these parameters, the problem is still there; I think it's
> time to go back to the Hadoop 1.0 infrastructure.
>
> thanks for your advice again.
>
>
>
> 2013/12/5 YouPeng Yang <yypvsxf19870706@gmail.com>
>
>> Hi
>>
>>  please refer to
>> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
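
For a node like the one described here (8 cores, 16 GB RAM), the linked post
works out container counts and sizes from the hardware. Below is a rough
sketch of that kind of calculation; the constants used (2 containers per
core, 1.8 per disk, heap at 80% of the container size, RAM reserved for the
OS) are this sketch's assumptions, so check the post for its exact
recommendations:

```python
# Rule-of-thumb YARN/MapReduce memory sizing in the spirit of the linked
# HDP post. All constants below are illustrative assumptions, not the
# post's exact numbers.
def yarn_sizing(cores, disks, ram_gb, min_container_mb=2048):
    # Reserve some RAM for the OS and Hadoop daemons (assumed heuristic).
    reserved_gb = max(1, min(8, ram_gb // 8))
    avail_mb = (ram_gb - reserved_gb) * 1024
    # Container count is bounded by cores, spindles, and available memory.
    containers = int(min(2 * cores, 1.8 * disks, avail_mb / min_container_mb))
    ram_per_container = max(min_container_mb, avail_mb // containers)
    return {
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container,
        "yarn.scheduler.minimum-allocation-mb": ram_per_container,
        "mapreduce.map.memory.mb": ram_per_container,
        "mapreduce.reduce.memory.mb": 2 * ram_per_container,
        # Keep the JVM heap below the container size so non-heap JVM
        # memory fits under the physical-memory limit.
        "mapreduce.map.java.opts": "-Xmx%dm" % int(0.8 * ram_per_container),
        "mapreduce.reduce.java.opts": "-Xmx%dm" % int(0.8 * 2 * ram_per_container),
    }

# Example: the 8-core, 16 GB node from this thread, assuming 4 data disks.
print(yarn_sizing(8, 4, 16))
```

With these assumed constants the result comes out close to, but not identical
to, the values tried in this thread; the point is that all the knobs have to
be derived together from the same hardware budget.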
>>
>>
>>
>> 2013/12/5 panfei <cnweike@gmail.com>
>>
>>> we have already tried several values for these two parameters, but it
>>> doesn't seem to help.
>>>
>>>
>>> 2013/12/5 Tsuyoshi OZAWA <ozawa.tsuyoshi@gmail.com>
>>>
>>>> Hi,
>>>>
>>>> Please check properties like mapreduce.reduce.memory.mb and
>>>> mapreduce.map.memory.mb in mapred-site.xml. These properties set the
>>>> resource limits for mappers and reducers.
>>>>
>>>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cnweike@gmail.com> wrote:
>>>> >
>>>> >
>>>> > ---------- Forwarded message ----------
>>>> > From: panfei <cnweike@gmail.com>
>>>> > Date: 2013/12/4
>>>> > Subject: Container
>>>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>>> running
>>>> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>>>> memory
>>>> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
>>>> > To: CDH Users <cdh-user@cloudera.org>
>>>> >
>>>> >
>>>> > Hi All:
>>>> >
>>>> > We are using CDH4.5 Hadoop in production. When we submit some (not
>>>> > all) jobs from Hive, we get the following exception; it seems that
>>>> > neither the physical memory nor the virtual memory is enough for the
>>>> > job to run:
>>>> >
>>>> >
>>>> > Task with the most failures(4):
>>>> > -----
>>>> > Task ID:
>>>> >   task_1386156666044_0001_m_000000
>>>> >
>>>> > URL:
>>>> >
>>>> >
>>>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>>>> > -----
>>>> > Diagnostic Messages for this Task:
>>>> > Container
>>>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>>> > container.
>>>> > Dump of the process-tree for container_1386156666044_0001_01_000013:
>>>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES)
>>>> FULL_CMD_LINE
>>>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>>>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>>>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>>>> >
>>>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>>>> > -Dlog4j.configuration=container-log4j.properties
>>>> >
>>>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>>>> > -Dyarn.app.mapreduce.container.log.filesize=0
>>>> -Dhadoop.root.logger=INFO,CLA
>>>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>>>> > attempt_1386156666044_0001_m_000000_3 13
>>>> >
>>>> > following is some of our configuration:
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>>>> >     <value>12288</value>
>>>> >   </property>
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>>> >     <value>8</value>
>>>> >   </property>
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>>>> >     <value>false</value>
>>>> >   </property>
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>>> >     <value>6</value>
>>>> >   </property>
>>>> >
>>>> > can you give me some advice? thanks a lot.
>>>> > --
>>>> > 不学习,不知道 (If you don't learn, you don't know)
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > 不学习,不知道 (If you don't learn, you don't know)
>>>>
>>>>
>>>>
>>>> --
>>>> - Tsuyoshi
>>>>
>>>
>>>
>>>
>>> --
>>> 不学习,不知道 (If you don't learn, you don't know)
>>>
>>
>>
>
>
> --
> 不学习,不知道 (If you don't learn, you don't know)
>
