hadoop-common-user mailing list archives

From Srigurunath Chakravarthi <srig...@yahoo-inc.com>
Subject RE: heap memory
Date Mon, 08 Feb 2010 18:20:49 GMT
Hi Gang,
 Not sure if I understood your question right. Responses inline:

>-----Original Message-----
>From: Gang Luo [mailto:lgpublic@yahoo.com.cn]
>Sent: Friday, February 05, 2010 8:24 AM
>To: common-user@hadoop.apache.org
>Subject: heap memory
>Hi all,
>I suppose the map function is the only thing that consumes the heap memory
>assigned to each map task. Since the default heap size is 200 MB, I just
>wonder whether most of the memory is wasted on a simple map function (e.g.

Map output records get buffered in memory before being spilled to disk. You can control this buffer size via
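The reply above is cut off in the archive. In Hadoop releases of this era (0.20.x), the map-side sort buffer was typically sized with the io.sort.mb property, and the spill threshold with io.sort.spill.percent — property names are an assumption based on the version current at the time, not stated in the original message. A sketch of the relevant mapred-site.xml entries:

```xml
<!-- Sketch, assuming Hadoop 0.20-era property names; values are illustrative. -->
<!-- Size (in MB) of the in-memory buffer that map output records fill
     before being spilled to disk. -->
<property>
  <name>io.sort.mb</name>
  <value>100</value>
</property>
<!-- Fraction of io.sort.mb at which a background spill to disk begins. -->
<property>
  <name>io.sort.spill.percent</name>
  <value>0.80</value>
</property>
```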

>So, I try to make use of this memory by buffering the output records, or
>by maintaining a large data structure in memory, but it doesn't work as I
>expected. For example, I want to build a hash table on a 100 MB table in
>memory during the lifetime of that map task, but it fails for lack of
>heap memory. Don't I get 200 MB of heap? What else also eats my heap

You can request a larger heap for Map tasks by including -Xmx
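This line is also truncated in the archive. In Hadoop 0.20-era deployments, the child JVM options (including the -Xmx heap setting) were usually passed through the mapred.child.java.opts property — again an assumption about the intended property, based on the Hadoop version contemporary with this thread. For example, in mapred-site.xml:

```xml
<!-- Sketch, assuming the Hadoop 0.20-era property name; the 512 MB
     value is illustrative. Gives each map/reduce task's child JVM
     a larger maximum heap than the 200 MB default. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```

Note that raising -Xmx increases memory use per task slot, so the number of concurrent map/reduce slots per node may need to be lowered to stay within physical RAM.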
