hadoop-common-user mailing list archives

From madhu phatak <phatak....@gmail.com>
Subject Re: Can anyone help me with large distributed cache files?
Date Tue, 12 Jun 2012 05:39:25 GMT
Hi Sheng,
By default, the distributed cache size limit is 10GB, so your 2GB file fits
comfortably within the default and can be placed in the distributed cache.
If you need a larger cache, set local.cache.size in mapred-site.xml to a
bigger value (the value is specified in bytes).
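A minimal sketch of the mapred-site.xml entry (20GB chosen here purely as an
illustrative value; local.cache.size takes a size in bytes):

    <property>
      <name>local.cache.size</name>
      <!-- illustrative example: 20GB expressed in bytes (20 * 1024^3) -->
      <value>21474836480</value>
    </property>

Note this controls the size of the local cache directory on each TaskTracker
node, not the heap of the task JVMs, so it does not by itself change task
memory settings.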

On Tue, Jun 12, 2012 at 5:22 AM, Sheng Guo <enigmaguo@gmail.com> wrote:

> Hi,
>
> Sorry to bother you all, this is my first question here in hadoop user
> mailing list.
> Can anyone help me with the memory configuration if distributed cache is
> very large and requires more memory? (2GB)
>
> And also in this case when distributed cache is very large, how do we
> handle this normally? By configure something to give more memory? or this
> should be avoided?
>
> Thanks
>



-- 
https://github.com/zinnia-phatak-dev/Nectar
