hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Hbase read performance with increasing number of client threads
Date Fri, 20 Aug 2010 07:34:40 GMT
Thanks,

One of the major problems we are facing is the lack of IO pushdown.
We need to push IO requests down the path HBase -> regionserver -> OS ->
disk.  The latter two layers do IO path optimization, and that is where
we will see speedups.  There is also a chance to do IO path optimization
in the HDFS layer, by predicting or measuring actual IO load and
routing read requests to the less loaded replica copies.
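The replica-routing idea can be sketched as follows; the node names, load numbers, and `pick_replica` function are all invented for illustration, since the HDFS client of this era did no such load-aware routing:

```python
# Hypothetical sketch of load-aware replica selection. The load metric
# (say, a 0-1 utilization figure reported or measured per datanode) and
# the node addresses are assumptions, not real HDFS behavior.

def pick_replica(replicas, load):
    """Return the replica datanode with the lowest measured IO load."""
    return min(replicas, key=lambda node: load.get(node, 0.0))

replicas = ["dn1:50010", "dn2:50010", "dn3:50010"]
load = {"dn1:50010": 0.9, "dn2:50010": 0.2, "dn3:50010": 0.5}
print(pick_replica(replicas, load))  # dn2:50010, the least loaded copy
```

The interesting engineering questions are in the load signal itself (predicted versus measured, and how stale a measurement can be before routing on it hurts), not in the selection step.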

This requires us to push hundreds of reads/sec per node down into the
datanode.  Right now we accomplish this by opening a socket for every
random read, which also ties up a thread on the datanode side.  Socket
and thread limits become an issue.
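The alternative to a socket per random read is to let many reads share a small set of long-lived connections. A minimal sketch of that idea; the pool class and fake connection factory here are illustrative assumptions, not the actual datanode protocol:

```python
# Illustrative sketch only: instead of opening a fresh socket for every
# random read (one datanode thread per socket), reuse a small pool of
# long-lived connections across many reads.
import queue

class ConnectionPool:
    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # e.g. open a socket to the datanode

    def read(self, request):
        conn = self._pool.get()        # block until a connection is free
        try:
            return conn(request)       # one connection serves many reads
        finally:
            self._pool.put(conn)

opened = []                            # count how many "sockets" we open

def fake_connection():
    opened.append(1)
    return lambda req: ("ok", req)     # stand-in for a positioned read

pool = ConnectionPool(fake_connection, size=4)
results = [pool.read(i) for i in range(100)]
print(len(opened), len(results))       # 4 connections served 100 reads
```

The datanode-side win is the same as the client-side one: a bounded number of sockets and threads, regardless of how many reads/sec are in flight.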

The bad part is that the solutions are pretty radical.  I suspect
they'll show up in places other than the ASF Hadoop release first.

-ryan

On Fri, Aug 20, 2010 at 12:22 AM, Jeff Hammerbacher <hammer@cloudera.com> wrote:
> Great, thanks. Most critical HDFS features that have JIRAs against them get
> reviewed, in my experience, but they just take time. Symlinks (
> https://issues.apache.org/jira/browse/HDFS-245) and appends are two examples
> that come to mind.
>
> Gathering up public opinion about the importance of various features should
> help us lobby for resources for them to get fixed. I've started dumping a
> few at https://issues.apache.org/jira/browse/HBASE-2926.
>
> On Fri, Aug 20, 2010 at 12:11 AM, Ryan Rawson <ryanobjc@gmail.com> wrote:
>
>> I dug out these two issues:
>>
>> https://issues.apache.org/jira/browse/HDFS-918
>>
>> https://issues.apache.org/jira/browse/HDFS-1323
>>
>> There was also something about speeding up random reads in HDFS, but
>> as is typical these kinds of issues go to JIRA to die.
>>
>> -ryan
>>
>>
>> On Thu, Aug 19, 2010 at 11:51 PM, Jeff Hammerbacher <hammer@cloudera.com>
>> wrote:
>> > Hey Ryan,
>> >
>> > Could you point to the particular JIRA issues for the DFS client that are
>> > causing these performance issues for HBase? Knowing is half the battle.
>> >
>> > Thanks,
>> > Jeff
>> >
>> > On Thu, Aug 19, 2010 at 9:20 PM, Ryan Rawson <ryanobjc@gmail.com> wrote:
>> >
>> >> Due to DFS client issues, things are not quite as good as they
>> >> should be... They are being worked on, so this will get resolved in time.
>> >>
>> >> In the mean time, the key to fast access is caching... ram ram ram.
>> >>
>> >> -ryan
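One concrete way to lean on RAM, as in Abhijit's test below, is the in-memory flag on a column family, which asks HBase to favor keeping that family's blocks resident in the block cache. A sketch in the HBase shell, where the table and family names are placeholders (0.20 requires disabling a table before altering it):

```
hbase> disable 't1'
hbase> alter 't1', {NAME => 'f1', IN_MEMORY => 'true'}
hbase> enable 't1'
```

Note this is a cache hint, not a guarantee: the family still has to fit in the configured block cache to stay resident.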
>> >>
>> >> On Thu, Aug 19, 2010 at 10:15 AM, Abhijit Pol <apol@rocketfuelinc.com>
>> >> wrote:
>> >> > We are using the HBase 0.20.5 drop with the latest Cloudera Hadoop
>> >> > distribution.
>> >> >
>> >> > - We are hitting a 3-node HBase cluster from a client that has 10
>> >> > threads, each with a thread-local copy of the HTable client object
>> >> > and an established connection to the server.
>> >> > - Each of the 10 threads issues 10,000 read requests for keys randomly
>> >> > selected from a pool of 1000 keys. All keys are present in HBase and
>> >> > the table is pinned in memory (to make sure we don't have any disk
>> >> > seeks).
>> >> > - If we run this test with 10 threads, we get avg latency as seen by
>> >> > the client = 8ms (excluding the initial 10-connection setup time).
>> >> > But if we increase # threads to 100, 250, and 500, we get increasing
>> >> > latency numbers like 26ms, 51ms, and 90ms.
>> >> > - We have enabled HBase metrics on the RS and we see "get_avg_time"
>> >> > on all RS between 5-15ms in all tests, consistently.
>> >> >
>> >> > Is this expected? Any tips to get consistent performance below 20ms?
>> >> >
>> >>
>> >
>>
>
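For what it's worth, the latency growth Abhijit reports is roughly what simple queueing at a fixed regionserver RPC handler pool would predict: server-side get_avg_time stays flat at 5-15ms while client-side latency grows with thread count, which is the signature of requests waiting for a free handler (hbase.regionserver.handler.count). A back-of-the-envelope sketch, where the total handler count (assumed 10 per regionserver across the 3 nodes) and the 8ms service time are assumptions for illustration, not measurements from the thread:

```python
# Crude queueing estimate, assumptions only: N client threads sharing H
# handler threads, each get taking s ms of service time, wait roughly
# N/H "turns" of the service time once N exceeds H.

def predicted_latency_ms(clients, handlers, service_ms):
    return max(clients / handlers, 1.0) * service_ms

# Assumed: 3 regionservers x 10 handlers each, ~8ms per get.
for n in (10, 100, 250, 500):
    print(n, round(predicted_latency_ms(n, handlers=30, service_ms=8.0)))
```

With those assumptions the model gives ~8ms at 10 threads and ~27ms at 100, close to the measured 8ms and 26ms; it overshoots at higher thread counts (67ms and 133ms versus the measured 51ms and 90ms), which suggests the effective server-side concurrency is somewhat higher than the assumed handler count. Either way, the flat server-side metric plus rising client-side latency points at queueing in front of the regionservers, not slower gets.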
