hbase-user mailing list archives

From 牛兆捷 <nzjem...@gmail.com>
Subject Re: performance of block cache
Date Wed, 17 Sep 2014 02:23:36 GMT
Aha. Thanks~

2014-09-17 1:57 GMT+08:00 Nick Dimiduk <ndimiduk@gmail.com>:

> Replying to this thread is getting bounced as spam. Here's the reply I
> sent yesterday.
>
> On Mon, Sep 15, 2014 at 7:52 PM, Nick Dimiduk <ndimiduk@gmail.com> wrote:
>
>> The explicit JAVA_HOME requirement is new via HBASE-11534.
>>
>> On Mon, Sep 15, 2014 at 3:16 AM, 牛兆捷 <nzjemail@gmail.com> wrote:
>>
>>> It works now after configuring $JAVA_HOME explicitly.
>>>
>>> JAVA_HOME was configured as $JAVA_HOME by default. Now I configure it to
>>> the complete path of my JDK explicitly.
>>>
>>> A little strange here: $JAVA_HOME is already set in the shell
>>> environment, so why do I still need to configure it again explicitly...
>>>
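>>> For reference, the change amounts to something like the following in
>>> conf/hbase-env.sh (the JDK path below is only an illustration; substitute
>>> the actual install location):
>>>
>>>   # Resolve JAVA_HOME to a concrete path instead of relying on the
>>>   # value inherited from the login shell's environment.
>>>   export JAVA_HOME=/usr/java/jdk1.6.0_39
>>>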
>>> 2014-09-15 14:58 GMT+08:00 牛兆捷 <nzjemail@gmail.com>:
>>>
>>> > java -d64 -version works well in the shell.
>>> >
>>> > 2014-09-15 11:59 GMT+08:00 牛兆捷 <nzjemail@gmail.com>:
>>> >
>>> >> I use hbase-0.98.5-hadoop2 and modify the default heap size of the region
>>> >> server in hbase-env.sh as below (keeping all the other parameters in the
>>> >> file at their defaults):
>>> >>
>>> >> export HBASE_REGIONSERVER_OPTS="-Xmn200m
>>> >> -XX:CMSInitiatingOccupancyFraction=70 -Xms1024m -Xmx8000m"
>>> >>
>>> >> The error occurs when I start the HBase cluster:
>>> >>
>>> >> 10.1.255.246: Invalid maximum heap size: -Xmx8000m
>>> >> 10.1.255.246: The specified size exceeds the maximum representable size.
>>> >> 10.1.255.246: Could not create the Java virtual machine.
>>> >>
>>> >> The JVM I use is 64-bit:
>>> >>
>>> >> java version "1.6.0_39"
>>> >> Java(TM) SE Runtime Environment (build 1.6.0_39-b04)
>>> >> Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
>>> >>
>>> >> Why does the 8 GB setting exceed the maximum representable size?
>>> >>
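>>> >> (A quick sanity check, independent of HBase, is to hand the same flag to
>>> >> the JVM directly; a 32-bit JVM typically rejects a heap this large with
>>> >> exactly the "maximum representable size" error above:
>>> >>
>>> >>   $JAVA_HOME/bin/java -d64 -Xmx8000m -version
>>> >>
>>> >> If that prints the 64-Bit Server VM banner, the flag itself is fine and
>>> >> the HBase launcher is probably resolving a different java binary.)
>>> >>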
>>> >> 2014-09-15 11:39 GMT+08:00 Nick Dimiduk <ndimiduk@gmail.com>:
>>> >>
>>> >>> The scripts automate use of the PerformanceEvaluation tool that ships
>>> >>> with HBase, so in that way it runs against a cluster directly. It depends
>>> >>> on having independent configuration directories set up for each test
>>> >>> config. There's probably too much custom-to-my-environment stuff in
>>> >>> there, but I hope I included enough diffs that you can work it out in
>>> >>> your deployment. Let me know if you have any more questions.
>>> >>>
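>>> >>> (If you want to try the underlying tool by hand first, a minimal
>>> >>> invocation looks roughly like this; the workload name and client count
>>> >>> are illustrative, not the ones used in the study:
>>> >>>
>>> >>>   # run the random-read workload with 10 client threads against the
>>> >>>   # cluster described by the active configuration directory
>>> >>>   hbase org.apache.hadoop.hbase.PerformanceEvaluation randomRead 10
>>> >>>
>>> >>> Pointing HBASE_CONF_DIR at a different conf directory before each run is
>>> >>> one way to switch between test configurations.)
>>> >>>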
>>> >>> -n
>>> >>>
>>> >>> On Sunday, September 14, 2014, 牛兆捷 <nzjemail@gmail.com> wrote:
>>> >>>
>>> >>> > Hi, Nick
>>> >>> >
>>> >>> > Can your perf_blockcache performance testing script be applied to an
>>> >>> > HBase cluster directly?
>>> >>> > If not, what kind of things should I take care of?
>>> >>> >
>>> >>> > 2014-08-22 7:06 GMT+08:00 Nick Dimiduk <ndimiduk@gmail.com>:
>>> >>> >
>>> >>> > > I'm familiar with Stack's work too, but thanks for pointing it out :)
>>> >>> > >
>>> >>> > > On Wed, Aug 20, 2014 at 8:19 PM, 牛兆捷 <nzjemail@gmail.com> wrote:
>>> >>> > >
>>> >>> > > > Hi Nick:
>>> >>> > > >
>>> >>> > > > Yes, I am interested in it. I will try it first.
>>> >>> > > >
>>> >>> > > > Btw, this site (http://people.apache.org/~stack/bc/) also does a
>>> >>> > > > similar performance evaluation.
>>> >>> > > > You can have a look if you are interested.
>>> >>> > > >
>>> >>> > > >
>>> >>> > > > 2014-08-21 1:48 GMT+08:00 Nick Dimiduk <ndimiduk@gmail.com>:
>>> >>> > > >
>>> >>> > > > > Hi Zhaojie,
>>> >>> > > > >
>>> >>> > > > > I'm responsible for this particular bit of work. One thing to note
>>> >>> > > > > in these experiments is that I did not control explicitly for OS
>>> >>> > > > > caching. I ran "warmup" workloads before collecting measurements,
>>> >>> > > > > but because the amount of RAM on the machine is fixed, the impact
>>> >>> > > > > of the OS cache differs based on the amount of memory used by
>>> >>> > > > > HBase. Another thing, as Todd pointed out on an earlier thread, is
>>> >>> > > > > that my trend lines are probably optimistic/misleading.
>>> >>> > > > >
>>> >>> > > > > Something I was driving for was to understand how well the
>>> >>> > > > > different implementations perform as they're managing more and
>>> >>> > > > > more memory. I'd like to get some insight into how we might be
>>> >>> > > > > able to take advantage of 100's or even 1000's of GB of memory
>>> >>> > > > > when the time comes. That's part of why there are so many
>>> >>> > > > > variables.
>>> >>> > > > >
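>>> >>> > > > > (For context, the cache implementations compared map to
>>> >>> > > > > configuration roughly as in this sketch for the off-heap
>>> >>> > > > > BucketCache case; the property names are the standard HBase ones,
>>> >>> > > > > but the size value is illustrative and its interpretation
>>> >>> > > > > (megabytes vs. fraction of heap) depends on the HBase version:
>>> >>> > > > >
>>> >>> > > > >   <!-- hbase-site.xml: enable an off-heap BucketCache; the
>>> >>> > > > >        on-heap LruBlockCache remains as the L1 cache -->
>>> >>> > > > >   <property>
>>> >>> > > > >     <name>hbase.bucketcache.ioengine</name>
>>> >>> > > > >     <value>offheap</value>
>>> >>> > > > >   </property>
>>> >>> > > > >   <property>
>>> >>> > > > >     <name>hbase.bucketcache.size</name>
>>> >>> > > > >     <value>4096</value>
>>> >>> > > > >   </property>
>>> >>> > > > >
>>> >>> > > > > The off-heap engine also needs direct memory reserved for the
>>> >>> > > > > region server, e.g. via -XX:MaxDirectMemorySize in
>>> >>> > > > > HBASE_REGIONSERVER_OPTS.)
>>> >>> > > > >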
>>> >>> > > > > I scripted out the running of the tests, all of my configurations
>>> >>> > > > > are available in the associated github repo [0], and all of the
>>> >>> > > > > data points are available as a csv. If you're interested in
>>> >>> > > > > experimenting yourself, please let me know how I can help.
>>> >>> > > > >
>>> >>> > > > > Cheers,
>>> >>> > > > > Nick
>>> >>> > > > >
>>> >>> > > > > [0]: https://github.com/ndimiduk/perf_blockcache
>>> >>> > > > >
>>> >>> > > > >
>>> >>> > > > > On Wed, Aug 20, 2014 at 6:00 AM, 牛兆捷 <nzjemail@gmail.com> wrote:
>>> >>> > > > >
>>> >>> > > > > > the complete blog link is:
>>> >>> > > > > > http://zh.hortonworks.com/blog/blockcache-showdown-hbase/
>>> >>> > > > > >
>>> >>> > > > > >
>>> >>> > > > > > 2014-08-20 11:41 GMT+08:00 牛兆捷 <nzjemail@gmail.com>:
>>> >>> > > > > >
>>> >>> > > > > > > Hi all:
>>> >>> > > > > > >
>>> >>> > > > > > > I saw some interesting results from the Hortonworks blog (block cache
>>> >>> > > > > > > <http://zh.hortonworks.com/wp-content/uploads/2014/03/perfeval_blockcache_v2.pdf>).
>>> >>> > > > > > >
>>> >>> > > > > > > In this result, the ratio of memory footprint to database size
>>> >>> > > > > > > is held fixed while the absolute values are increased.
>>> >>> > > > > > >
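>>> >>> > > > > > > (To make the ratios concrete with made-up numbers: at a cache
>>> >>> > > > > > > size of 20 GB, DB 1.5 : RAM 1.0 means roughly a 30 GB dataset,
>>> >>> > > > > > > while DB 4.5 : RAM 1.0 means roughly 90 GB, so the uncached
>>> >>> > > > > > > portion grows from about 10 GB to about 70 GB even though the
>>> >>> > > > > > > ratio itself stays fixed as the absolute sizes scale up.)
>>> >>> > > > > > >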
>>> >>> > > > > > > In my mind, the performance should become worse for the larger
>>> >>> > > > > > > ratio as the absolute values increase. For example, for
>>> >>> > > > > > > BucketCache (tmpfs) the difference between ratio (DB 1.5 : RAM 1.0)
>>> >>> > > > > > > and ratio (DB 4.5 : RAM 1.0) becomes larger as memory increases.
>>> >>> > > > > > > Actually, the result for ratio (DB 1.5 : RAM 1.0) increases
>>> >>> > > > > > > linearly, and the result for ratio (DB 4.5 : RAM 1.0) increases
>>> >>> > > > > > > exponentially.
>>> >>> > > > > > >
>>> >>> > > > > > > However, for BucketCache (heap) and LruBlockCache, the result
>>> >>> > > > > > > is outside my expectation. The curves for ratio (DB 1.5 : RAM 1.0)
>>> >>> > > > > > > and ratio (DB 4.5 : RAM 1.0) both increase exponentially, but the
>>> >>> > > > > > > relative differences as memory increases are not consistent.
>>> >>> > > > > > > Take LruBlockCache as an example: the difference between ratio
>>> >>> > > > > > > (DB 1.5 : RAM 1.0) and ratio (DB 4.5 : RAM 1.0) becomes smaller
>>> >>> > > > > > > from 20 GB to 50 GB, but becomes larger from 50 GB to 60 GB.
>>> >>> > > > > > >
>>> >>> > > > > > > How can I analyze the cause of this result? Any ideas?
>>> >>> > > > > > >
>>> >>> > > > > > > --
>>> >>> > > > > > > *Regards,*
>>> >>> > > > > > > *Zhaojie*
>>> >>> > > > > > >
>>> >>> > > > > > >
>>> >>> > > > > >
>>> >>> > > > > >
>>> >>> > > > > > --
>>> >>> > > > > > *Regards,*
>>> >>> > > > > > *Zhaojie*
>>> >>> > > > > >
>>> >>> > > > >
>>> >>> > > >
>>> >>> > > >
>>> >>> > > >
>>> >>> > > > --
>>> >>> > > > *Regards,*
>>> >>> > > > *Zhaojie*
>>> >>> > > >
>>> >>> > >
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> > --
>>> >>> > *Regards,*
>>> >>> > *Zhaojie*
>>> >>> >
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> *Regards,*
>>> >> *Zhaojie*
>>> >>
>>> >>
>>> >
>>> >
>>> > --
>>> > *Regards,*
>>> > *Zhaojie*
>>> >
>>> >
>>>
>>>
>>> --
>>> *Regards,*
>>> *Zhaojie*
>>>
>>
>>
>


-- 
*Regards,*
*Zhaojie*
