hbase-user mailing list archives

From Jack Levin <magn...@gmail.com>
Subject Re: question about meta data query intensity
Date Wed, 24 Nov 2010 00:29:03 GMT
on 0.89 still...

On Tue, Nov 23, 2010 at 4:28 PM, Jack Levin <magnito@gmail.com> wrote:
> if I set it higher, say to 10 minutes, will there be any potential ill effects?
>
>  -Jack
>
> On Tue, Nov 23, 2010 at 4:24 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
>> Jack, you didn't upgrade to 0.90 yet, right? Then there's a master
>> background thread that scans .META. every minute... but with that
>> number of rows it's probably best to set that much higher. The
>> config's name is hbase.master.meta.thread.rescanfrequency
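
For reference, a sketch of how that could be set in hbase-site.xml on the master. The property name is the one J-D gives above; the 600000 value (ten minutes, assuming the setting is in milliseconds like the one-minute default of 60000) is only an illustration of Jack's question:

  <property>
    <name>hbase.master.meta.thread.rescanfrequency</name>
    <value>600000</value>
    <!-- how often the master's background thread rescans .META.; default 60000 (one minute) -->
  </property>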
>>
>> You should also take a look at your master log to see how long it's
>> taking to scan the whole thing currently. On one cluster here I have:
>>
>> 2010-11-23 16:22:21,307 INFO
>> org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner
>> scanning meta region {server: 10.10.21.40:60020, regionname:
>> .META.,,1.1028785192, startKey: <>}
>> 2010-11-23 16:22:25,129 INFO
>> org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner
>> scan of 7355 row(s) of meta region {server: 10.10.21.40:60020,
>> regionname: .META.,,1.1028785192, startKey: <>} complete
>>
>> Meaning that it took ~4 seconds to scan 7355 rows.
>>
>> J-D
>>
>> On Tue, Nov 23, 2010 at 4:15 PM, Jack Levin <magnito@gmail.com> wrote:
>>> It's requests=6204 ... but we have not been loading the cluster with
>>> queries at all.  I see that CPU is about 35% used vs other boxes at
>>> user CPU of 10% or so... So it's really the CPU load that worries me
>>> more than the IO.
>>>
>>> -Jack
>>>
>>> On Tue, Nov 23, 2010 at 1:55 PM, Stack <stack@duboce.net> wrote:
>>>> On Tue, Nov 23, 2010 at 11:06 AM, Jack Levin <magnito@gmail.com> wrote:
>>>>> It's REST, and generally no long-lived clients. Yes, caching of regions
>>>>> helps; however, we expect long-tail hits that will be uncached, which
>>>>> may stress out the meta region. That being said, is it possible to create
>>>>> affinity and nail the meta region to a beefy server or set of beefy
>>>>> servers?
>>>>>
>>>>
>>>> The REST server should be caching region locations for you.
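
To illustrate the caching being described, here is a minimal sketch using the 0.90-era Java client API (the table name and row keys are made up): the first get for an uncached row costs a .META. lookup, and the returned region location is cached by the client's shared connection, so later gets against the same region skip .META. entirely. The REST server holds this cache on behalf of its short-lived HTTP clients.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MetaLookupSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "mytable");  // hypothetical table
      // First read: the client asks .META. which regionserver holds this row's
      // region, then caches that location in the underlying connection.
      Result first = table.get(new Get(Bytes.toBytes("row-0001")));
      // Second read in the same region: served from the cached location,
      // no extra .META. round trip.
      Result second = table.get(new Get(Bytes.toBytes("row-0002")));
      table.close();
    }
  }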
>>>>
>>>> On the .META. side, since it's accessed so frequently, it should be
>>>> nailed into the block cache, but if 1000 regions are sitting beside that
>>>> .META. there could be contention.
>>>>
>>>> There is also hbase.client.prefetch.limit, the number of region
>>>> locations to fetch every time we do a lookup into .META. Currently it's
>>>> set to 10.  You could try setting this down to 1?
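
For reference, a sketch of what lowering that prefetch limit could look like in hbase-site.xml on the client (REST server) side; the property name and default of 10 are from Stack's note above, and whether 1 is actually a good value is the open question:

  <property>
    <name>hbase.client.prefetch.limit</name>
    <value>1</value>
    <!-- region locations fetched per .META. lookup; default is 10 -->
  </property>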
>>>>
>>>> What are you seeing for request rates and load on the .META. hosting
>>>> regionserver?
>>>>
>>>> St.Ack
>>>>
>>>
>>
>
