hadoop-common-user mailing list archives

From Sheng Guo <enigma...@gmail.com>
Subject Re: Can anyone help me with large distributed cache files?
Date Tue, 12 Jun 2012 09:04:21 GMT
It works! Thanks!

On Mon, Jun 11, 2012 at 10:39 PM, madhu phatak <phatak.dev@gmail.com> wrote:

> Hi Sheng,
> By default, the distributed cache size limit is 10 GB, so your 2 GB file
> can be placed in the distributed cache without any change. If you need more
> space, configure local.cache.size in mapred-site.xml with a bigger value.
>
> On Tue, Jun 12, 2012 at 5:22 AM, Sheng Guo <enigmaguo@gmail.com> wrote:
>
> > Hi,
> >
> > Sorry to bother you all; this is my first question on the hadoop user
> > mailing list.
> > Can anyone help me with the memory configuration when a distributed cache
> > file is very large (2 GB) and requires more memory?
> >
> > More generally, when the distributed cache is very large, how is this
> > normally handled? By configuring something to allow more space, or should
> > large cache files be avoided altogether?
> >
> > Thanks
> >
>
>
>
> --
> https://github.com/zinnia-phatak-dev/Nectar
>
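[For readers hitting the same limit: the fix above amounts to overriding
local.cache.size in mapred-site.xml. A minimal sketch follows; the value is
in bytes, and 20 GB is shown only as an illustrative choice, not a
recommendation:]

```xml
<!-- mapred-site.xml: raise the DistributedCache size limit -->
<property>
  <name>local.cache.size</name>
  <!-- 20 GB in bytes; the default is 10737418240 (10 GB) -->
  <value>21474836480</value>
</property>
```

[TaskTrackers read mapred-site.xml at startup, so restart them after
changing this value.]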
