hadoop-hdfs-user mailing list archives

From xeonmailinglist <xeonmailingl...@gmail.com>
Subject Run Yarn with 2GB of RAM and 1 CPU core
Date Sat, 02 Apr 2016 12:36:32 GMT
I have configured Hadoop YARN in a VM with 2048 MB of RAM and 1 CPU core.
Then I configured the minimum and maximum memory and vcore limits for
YARN in |mapred-site.xml|:

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>1</value>
</property>
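In case the file placement matters: as far as I know, the
|yarn.scheduler.*| and |yarn.nodemanager.*| properties are normally read
from |yarn-site.xml| rather than |mapred-site.xml|. A complete
|yarn-site.xml| carrying the same settings would look like this (the
|<configuration>| element is the standard Hadoop config wrapper; the
values are the ones above):

```xml
<?xml version="1.0"?>
<!-- yarn-site.xml: the usual home of yarn.scheduler.* and
     yarn.nodemanager.* settings in stock Hadoop -->
<configuration>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
</configuration>
```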

But when I run an example job, a word count on a 32 KB file, it simply
won't run: it stays at |0%| forever.

I have tried decreasing the maximum and minimum allocation values in the
|mapred-site.xml| to |1024| and |512|, respectively, but the job still
wouldn't run, so I went back to the |2048| and |1024| values.
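For reference, my understanding is that each MapReduce job also requests
its own container sizes, which have to fit inside the allocation limits
above. These are the standard property names I would check (the values
here are just an illustration, not taken from my actual config):

```xml
<!-- mapred-site.xml: per-job container requests; each of these must fit
     within yarn.scheduler.maximum-allocation-mb, and together they must
     fit in yarn.nodemanager.resource.memory-mb for tasks to run -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>512</value>
</property>
```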

To try to understand what is going on, I have looked at the YARN logs,
but I haven't found much useful information. The only relevant lines I
found were these, and they seem to say that everything is OK:

2016-04-01 12:15:51,728 INFO Starting resource-monitoring for container_1459527184896_0001_01_000001
2016-04-01 12:15:51,797 INFO Memory usage of ProcessTree 29808 for container-id container_1459527184896_0001_01_000001: 48.9 MB of 2 GB physical memory used; 1.1 GB of 4.2 GB virtual memory used
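In case it helps anyone reproduce this: the application id can be read
straight off the container id in the log line above (a container id has
the form |container_<cluster-ts>_<app-seq>_<attempt>_<container-seq>|),
and the full container logs can then be pulled with the standard
|yarn logs| command:

```shell
# Derive the application id from the container id seen in the log line.
container_id="container_1459527184896_0001_01_000001"
# Keep the cluster timestamp and application sequence number (fields 2-3).
app_id="application_$(echo "$container_id" | cut -d_ -f2-3)"
echo "$app_id"   # application_1459527184896_0001

# Fetch all container logs for the application (standard YARN CLI):
# yarn logs -applicationId "$app_id"
```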

I don't know why my example won't run. Any help debugging this problem?
Is it even possible to run a job in a VM with 2048 MB of RAM and 1 CPU core?

