hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: Cluster Size/Node Density
Date Sat, 19 Feb 2011 23:22:41 GMT
That would be the second report I've seen in less than a week of u23 being
less stable than u17. Interesting...

J-D

On Sat, Feb 19, 2011 at 9:43 AM, Wayne <wav100@gmail.com> wrote:
> What JVM is recommended for the new memstore allocator? We switched from u23
> back to u17, which helped a lot. Is this optimized for a specific JVM or does
> it not matter?
>
> On Fri, Feb 18, 2011 at 5:46 PM, Todd Lipcon <todd@cloudera.com> wrote:
>>
>> On Fri, Feb 18, 2011 at 12:10 PM, Jean-Daniel Cryans
>> <jdcryans@apache.org>wrote:
>>
>> > The bigger the heap, the longer the stop-the-world GC pause when
>> > fragmentation requires it; 8GB is "safer".
>> >
>>
>> On my boxes, a stop-the-world GC on an 8G heap is already around 80
>> seconds... pretty catastrophic. Of course we've bumped the ZK timeout up to
>> several minutes these days, but it's just a band-aid.
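>>
>> For context, these GC knobs live in hbase-env.sh; a sketch of a typical
>> CMS setup, with the heap size and occupancy fraction as illustrative
>> values rather than a recommendation:
>>
>>   # illustrative: tune heap size and occupancy fraction per cluster
>>   export HBASE_OPTS="$HBASE_OPTS -Xms8g -Xmx8g \
>>     -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled \
>>     -XX:CMSInitiatingOccupancyFraction=70"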
>>
>>
>> >
>> > In 0.90.1 you can try enabling the new memstore allocator, which seems
>> > to do a really good job. Check out the JIRA first:
>> > https://issues.apache.org/jira/browse/HBASE-3455
>> >
>> >
>> Yep. Hopefully I'll have time to do a blog post about it this weekend as
>> well. In my testing, try as I might, I can't get my region servers to do a
>> full GC anymore when this is enabled.
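>>
>> For anyone wanting to try it: MSLAB is off by default in 0.90.1 and is
>> enabled via hbase-site.xml. A minimal sketch, with the chunk size shown
>> at its shipped 2MB default:
>>
>>   <property>
>>     <name>hbase.hregion.memstore.mslab.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <!-- 2MB default chunk size -->
>>     <name>hbase.hregion.memstore.mslab.chunksize</name>
>>     <value>2097152</value>
>>   </property>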
>>
>> -Todd
>>
>>
>> > On Fri, Feb 18, 2011 at 12:05 PM, Chris Tarnas <cft@email.com> wrote:
>> > > Thank you, and that brings me to my next question...
>> > >
>> > > What is the current recommendation on the max heap size for HBase if
>> > > RAM on the server is not an issue? Right now I am at 8GB and have no
>> > > issues; can I safely do 12GB? The servers have plenty of RAM (48GB), so
>> > > that should not be an issue - I just want to minimize the risk that GC
>> > > will cause problems.
>> > >
>> > > thanks again.
>> > > -chris
>> > >
>> > > On Feb 18, 2011, at 11:59 AM, Jean-Daniel Cryans wrote:
>> > >
>> > >> That's what I usually recommend: the bigger the flushed files, the
>> > >> better. On the other hand, you only have so much memory to dedicate
>> > >> to the MemStore...
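>> > >>
>> > >> In hbase-site.xml that's a single property; a sketch with an
>> > >> illustrative value (the 0.90 default is 64MB):
>> > >>
>> > >>   <property>
>> > >>     <name>hbase.hregion.memstore.flush.size</name>
>> > >>     <!-- illustrative: 256MB flushes; bounded by total MemStore heap -->
>> > >>     <value>268435456</value>
>> > >>   </property>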
>> > >>
>> > >> J-D
>> > >>
>> > >> On Fri, Feb 18, 2011 at 11:50 AM, Chris Tarnas <cft@email.com> wrote:
>> > >>> Would it be a good idea to raise the hbase.hregion.memstore.flush.size
>> > >>> if you have really large regions?
>> > >>>
>> > >>> -chris
>> > >>>
>> > >>> On Feb 18, 2011, at 11:43 AM, Jean-Daniel Cryans wrote:
>> > >>>
>> > >>>> Fewer regions, but that's often a good thing if you have a lot of
>> > >>>> data :)
>> > >>>>
>> > >>>> It's probably a good idea to bump the HDFS block size to 128 or
>> > >>>> 256MB, since you know you're going to have huge-ish files.
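>> > >>>>
>> > >>>> Concretely, that's dfs.block.size in hdfs-site.xml (the Hadoop
>> > >>>> 0.20-era property name); a sketch using the 256MB figure:
>> > >>>>
>> > >>>>   <property>
>> > >>>>     <name>dfs.block.size</name>
>> > >>>>     <!-- 256MB; use 134217728 for 128MB -->
>> > >>>>     <value>268435456</value>
>> > >>>>   </property>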
>> > >>>>
>> > >>>> But anyway, regarding penalties, I can't think of one that clearly
>> > >>>> comes out (unless you use a very small heap). The IO usage patterns
>> > >>>> will change, but unless you flush very small files all the time and
>> > >>>> need to recompact them into much bigger ones, it shouldn't really
>> > >>>> be an issue.
>> > >>>>
>> > >>>> J-D
>> > >>>>
>> > >>>> On Fri, Feb 18, 2011 at 11:36 AM, Jason Rutherglen
>> > >>>> <jason.rutherglen@gmail.com> wrote:
>> > >>>>>> We are also using a 5GB region size to keep our region
>> > >>>>>> counts in the 100-200 range/node, per Jonathan Gray's
>> > >>>>>> recommendation.
>> > >>>>>
>> > >>>>> So there isn't a penalty incurred from increasing the max region
>> > >>>>> size from 256MB to 5GB?
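>> > >>>>>
>> > >>>>> For reference, the cap being discussed is hbase.hregion.max.filesize
>> > >>>>> in hbase-site.xml; the 5GB figure from above would look like:
>> > >>>>>
>> > >>>>>   <property>
>> > >>>>>     <name>hbase.hregion.max.filesize</name>
>> > >>>>>     <!-- 5GB; the 0.90 default is 256MB (268435456) -->
>> > >>>>>     <value>5368709120</value>
>> > >>>>>   </property>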
>> > >>>>>
>> > >>>
>> > >>>
>> > >
>> > >
>> >
>>
>>
>>
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>
>
