hbase-user mailing list archives

From "Kevin O'dell" <kevin.od...@cloudera.com>
Subject Re: Memory distribution for Hadoop/Hbase processes
Date Sun, 04 Aug 2013 14:42:55 GMT
My questions are :
1) How is this thing working? It works because the JVM can over-allocate
memory: Linux overcommits virtual memory, so heaps larger than physical RAM
can be reserved. You will know you are using too much memory when the
kernel starts killing processes.
2) I just have one table whose size at present is about 10-15 GB, so what
should be the ideal memory distribution? Really, you should get a box with
more memory. You can currently only hold about ~400 MB in memory.
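Kevin's point can be checked with quick shell arithmetic (a sketch; the
block-cache line assumes HBase's default hfile.block.cache.size of 0.25,
which is not stated in the thread):

```shell
#!/bin/sh
# Sum the heaps configured in Vimal's setup (all values in MB):
# NameNode, DataNode, Secondary NameNode at the default HADOOP_HEAPSIZE
# of 1000 MB each, plus HMaster (512), HRegionServer (1536), ZooKeeper (512).
total=0
for heap in 1000 1000 1000 512 1536 512; do
  total=$((total + heap))
done
echo "configured heaps: ${total} MB"   # more than the 4096 MB of RAM

# One plausible source of the "~400 MB in memory" figure: with the
# default block cache at 25% of the RegionServer heap,
# 0.25 x 1536 MB is roughly 384 MB.
cache=$((1536 * 25 / 100))
echo "approx block cache: ${cache} MB"
```

The JVM can reserve all 5.5 GB of heap because Linux only backs virtual
pages with physical memory when they are actually touched; the setup works
until the daemons' resident sets grow past 4 GB.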
On Aug 4, 2013 9:58 AM, "Ted Yu" <yuzhihong@gmail.com> wrote:

> What OS are you using ?
>
> What is the output from the following command ?
>  ps aux | grep pid
> where pid is the process Id for Namenode, Datanode, etc.
>
> Cheers
>
> On Sun, Aug 4, 2013 at 6:33 AM, Vimal Jain <vkjk89@gmail.com> wrote:
>
> > Hi,
> > I have configured Hbase in pseudo distributed mode with HDFS as
> underlying
> > storage.I am not using map reduce framework as of now
> > I have 4GB RAM.
> > Currently i have following distribution of memory
> >
> > DataNode, NameNode, Secondary NameNode: 1000 MB each (default
> > HADOOP_HEAPSIZE property)
> >
> > Hmaster - 512 MB
> > HRegion - 1536 MB
> > Zookeeper - 512 MB
> >
> > So the total heap allocation becomes 5.5 GB, which is absurd as my total
> > RAM is only 4 GB, but the setup is still working fine in production. :-0
> >
> > My questions are :
> > 1) How is this thing working?
> > 2) I just have one table whose size at present is about 10-15 GB, so
> > what should be the ideal memory distribution?
> > --
> > Thanks and Regards,
> > Vimal Jain
> >
>
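Ted's diagnostic can be scripted along these lines (a sketch; the `[j]ava`
bracket trick is a common way to keep grep from matching its own process,
and the final `ps` line is illustrative, not from the thread):

```shell
#!/bin/sh
# Per Ted's suggestion: find the Java daemons and inspect their memory.
ps aux | grep '[j]ava'

# For a single PID, compare VSZ (reserved virtual memory, which may
# exceed physical RAM under Linux overcommit) against RSS (pages
# actually resident in RAM). Here $$ is used just as an example PID.
ps -o pid,vsz,rss,comm -p $$
```

On this 4 GB box, the sum of the daemons' VSZ columns can legitimately
exceed 4 GB while the RSS columns stay under it.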
