hbase-user mailing list archives

From zhiyuan yang <sjtu....@gmail.com>
Subject Re: Random read operation about hundreds request per second
Date Sun, 09 Nov 2014 00:44:15 GMT
I've tried it, but it doesn't work. Could you please tell me what the
typical rps is in this kind of scenario? I have no idea what to expect.

On Fri, Nov 7, 2014 at 5:02 PM, Pere Kyle <pere@whisper.sh> wrote:

> I think it may be a Thrift issue. Have you tried playing with the
> connection queues? Set hbase.thrift.maxQueuedRequests to 0.
>
> From Varun Sharma:
> "If you are opening persistent connections (connections that never close),
> you
> should probably set the queue size to 0. Because those connections will
> anyways never get threads to serve them since the connections that go
> through first, will hog the thread pool"
> -Pere
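
For reference, that setting goes in hbase-site.xml on the node running the
Thrift server. A sketch of the fragment (the property name is as given above;
the placement is my assumption about a typical setup):

```xml
<!-- hbase-site.xml on the node running the HBase Thrift server -->
<property>
  <name>hbase.thrift.maxQueuedRequests</name>
  <!-- 0 = no request queue: persistent connections are handed threads
       directly instead of waiting behind connections that hog the pool -->
  <value>0</value>
</property>
```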
>
> On Nov 7, 2014, at 1:56 PM, zhiyuan yang <sjtu.yzy@gmail.com> wrote:
>
> > I used HBase 0.94.18 and Hadoop 2.4.0 on AWS EMR m1.large instances with
> > all default heap sizes.
> >
> > On Thu, Nov 6, 2014 at 10:11 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> >
> >> Can you provide a bit more information about your environment ?
> >>
> >> hbase release
> >> hadoop release
> >> hardware config
> >> heap size for the daemons
> >>
> >> Cheers
> >>
> >> On Thu, Nov 6, 2014 at 5:24 PM, zhiyuan yang <sjtu.yzy@gmail.com>
> wrote:
> >>
> >>> Hi,
> >>>
> >>> I'm new to HBase. Several days ago I built a web service with HBase as
> >>> the backend. However, when I used the ab benchmark to test the
> >>> performance of a read-only workload, the result was only hundreds of
> >>> requests per second, even though the HBase cache hit ratio was 100%.
> >>>
> >>> The architecture of my system is as follows. I use Netty as the web
> >>> framework and Thrift to connect to HBase. The Netty handler uses a
> >>> connection pool to get a Thrift connection and sends a simple get
> >>> query. HBase is deployed on one master and one slave. The Thrift
> >>> server runs on the HMaster.
> >>>
> >>> I'm sure the problem doesn't lie in pooling: if I just get a
> >>> connection in the handler without actually using it, the resulting
> >>> rps is several thousand. But I don't know exactly where the
> >>> bottleneck is.
> >>>
> >>> Can anyone help me out? Really appreciate your help.
> >>>
> >>> --
> >>>
> >>> *Thank you && Best Regards,*
> >>>
> >>> *Zhiyuan Yang*
> >>>
> >>
> >
> >
> >
> > --
> >
> > *Thank you && Best Regards,*
> >
> > *Zhiyuan Yang*
> >
> > ---------------------------------------------------------------------
> >
> > Master of Computational Data Science
> >
> > School of Computer Science
> >
> > Carnegie Mellon University
> >
> > 5000 Forbes Ave, Pittsburgh, PA, 15213
> >
> > Phone: (+1) 412-708-3527
>
