hbase-user mailing list archives

From Edward Capriolo <edlinuxg...@gmail.com>
Subject Re: Problems with region server OOME
Date Thu, 25 Mar 2010 14:57:53 GMT
On Thu, Mar 25, 2010 at 8:44 AM, Peter Falk <peter@bugsoft.nu> wrote:

> Thanks Andrew, Edward, Jonathan for the help. I will try to increase RAM
> and
> try the compressed oops suggestion.
>
> As I said, we are running data nodes and region servers on all four nodes.
> We are also running the name node and master on one of them, a secondary
> name node on two of them, and ZooKeeper on the other two. We tried to
> leave RAM for the OS file cache, to improve data node performance. There
> are currently about 3-4 GB of file cache on the nodes, except on the one
> running master and name node, where it is less. Any thoughts about the
> size of the file cache for such nodes?
>
> Regarding the type of rows/columns: the biggest table, which is also the
> most used, has few columns but large cells (about 5 kB on average). This
> table is also LZO compressed.
>
> BTW, is there any way to debug or log each find/mutate operation that is
> performed by the cluster? Even with the DEBUG log level on the master and
> region servers, there do not seem to be any such log messages.
>
> Thanks again,
> Peter
>
> On Wed, Mar 24, 2010 at 17:24, Andrew Purtell <apurtell@apache.org> wrote:
>
> > > We would appreciate tips/information of how to change the
> > > configuration so that OOME probability is minimized.
> >
> > Try running with 4GB heaps if you can.
> >
> > On recent JVMs -- but don't use 1.6.0_18! -- you can have the JVM
> > compress 64-bit object references into 32 bits. This will save heap at a
> > minor performance cost. Add '-XX:+UseCompressedOops' to HBASE_OPTS in
> > hbase-env.sh. For more information on compressed oops:
> > http://wikis.sun.com/display/HotSpotInternals/CompressedOops
> >
> > Also see the "HBase JVM and GC" section of the PerformanceTuning page
> > on the HBase wiki: http://wiki.apache.org/hadoop/PerformanceTuning
> >
> > Hope this helps,
> >
> >   - Andy
> >
>
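To make the suggestions above concrete, here is a minimal sketch of the relevant hbase-env.sh lines. The 4000 MB heap follows Andrew's 4 GB suggestion and is illustrative, not a recommendation for every cluster; note the '+' in the flag, which HotSpot requires to enable a boolean option:

```shell
# hbase-env.sh fragment -- a sketch of the suggestions in this thread.
# 4000 (MB) follows the 4 GB heap suggestion; adjust to your hardware.
export HBASE_HEAPSIZE=4000

# Compressed oops: 64-bit object references packed into 32 bits.
# Boolean HotSpot flags are enabled with -XX:+<flag> (note the '+').
# Avoid 1.6.0_18, which had known bugs with this feature.
export HBASE_OPTS="$HBASE_OPTS -XX:+UseCompressedOops"
```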

>> There are currently about 3-4 GB of file cache on the nodes, except on
>> the one running master and name node, where it is less. Any thoughts
>> about the size of the file cache for such nodes?

HBase has its own block cache that can be configured as a percentage of the
HBase heap. So it is generally better to give HBase the memory directly,
since the two are essentially doing the same thing.
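For reference, the block cache share is controlled by hfile.block.cache.size in hbase-site.xml. A sketch (the 0.35 value is purely illustrative; tune it against your read workload):

```xml
<!-- hbase-site.xml fragment: fraction of the region server heap given
     to the block cache. 0.35 is an illustrative value, not a default. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.35</value>
</property>
```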
