hbase-user mailing list archives

From Vladimir Rodionov <vrodio...@carrieriq.com>
Subject RE: HBase load distribution vs. scan efficiency
Date Mon, 20 Jan 2014 01:10:46 GMT
Use the HBase timestamp (version) as the natural time dimension in your design and
you will be fine. Create a Scan with a time range of [now() - 14 days, now()] and
HBase will take care of the rest. Each store file (HFile) records the most recent
timestamp it contains in its metadata, and if that timestamp falls outside the
requested range, the file will be excluded from the Scan operation entirely...
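
For concreteness, here is a minimal sketch of such a time-range scan. The table
name "trades" and the surrounding setup are illustrative assumptions, not part of
this thread; the calls shown (Scan.setTimeRange, Table.getScanner) are the standard
1.x-era client API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class LastTwoWeeksScan {

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("trades"))) { // hypothetical table
      long now = System.currentTimeMillis();
      long twoWeeksAgo = now - 14L * 24 * 60 * 60 * 1000;

      Scan scan = new Scan();
      // HBase skips any HFile whose recorded timestamps cannot
      // intersect [twoWeeksAgo, now) before reading it.
      scan.setTimeRange(twoWeeksAgo, now);

      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(result);
        }
      }
    }
  }
}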

But ...

You will need a custom compaction in place. I will briefly outline how to approach this problem.

First, RegionObserver provides the necessary hooks into region compactions - you
will need to figure out how to override the default compaction behavior using
these hooks.
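
As a starting point, a skeleton observer might look like the following. This is a
sketch only: the hook signature shown follows the 0.98/1.x coprocessor API and has
changed between HBase versions, and the actual time-partitioning logic is left as
comments.

import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.Store;

// Skeleton for a compaction-aware RegionObserver. This only shows where
// the hook fires; returning the scanner unchanged keeps default behavior.
public class TimePartitionedCompactionObserver extends BaseRegionObserver {

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> e,
      Store store, InternalScanner scanner, ScanType scanType) throws IOException {
    // Wrap or replace the scanner the compaction reads from here.
    // To control which files the compaction *writes*, you would have to
    // go further and plug in a custom compactor as well.
    return scanner;
  }
}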

Second, your compaction should output HFiles with no overlapping time ranges:

For example:

HFile 1 (contains all KVs for today)
HFile 2 (yesterday)
HFile 3 (day before yesterday)

etc.
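
To make the day-partitioning concrete, here is a small helper that maps a cell
timestamp to its day bucket (purely illustrative; a custom compactor would use
something like this to route cells into per-day output files):

import java.util.concurrent.TimeUnit;

public final class DayBuckets {

  private static final long MILLIS_PER_DAY = TimeUnit.DAYS.toMillis(1);

  private DayBuckets() {}

  // Maps a cell timestamp (epoch millis) to its day number since the epoch.
  public static long dayBucket(long timestampMillis) {
    return timestampMillis / MILLIS_PER_DAY;
  }

  // True if two cells belong in the same daily HFile.
  public static boolean sameDay(long tsA, long tsB) {
    return dayBucket(tsA) == dayBucket(tsB);
  }
}

With files laid out this way, a 14-day scan touches at most 14 HFiles per store,
because every other file's time range falls entirely outside the scan's range.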



Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodionov@carrieriq.com

________________________________________
From: Bill Q [bill.q.hdp@gmail.com]
Sent: Sunday, January 19, 2014 4:02 PM
To: user@hbase.apache.org
Subject: Re: HBase load distribution vs. scan efficiency

Hi Amit,
Thanks for the reply.

If I understand your suggestion correctly, and assuming we have 100 region
servers, I would have to do 100 scans and merge the results if I want to pull
any data for a specific date. Is that correct? And are 100 scans the most
efficient way to deal with this issue?

Any thoughts?

Many thanks.


Bill


On Sun, Jan 19, 2014 at 4:02 PM, Amit Sela <amits@infolinks.com> wrote:

> If you use bulk load to insert your data, you could use the date as the key
> prefix and choose the rest of the key in a way that splits each day evenly.
> You'll have X regions for every day, hence 14X regions for the two-week
> window.
> On Jan 19, 2014 8:39 PM, "Bill Q" <bill.q.hdp@gmail.com> wrote:
>
> > Hi,
> > I am designing a schema to host a large volume of data on HBase. We
> > collect daily trading data for some markets, and we run a moving-window
> > analysis to make predictions based on a two-week window.
> >
> > Since everybody is going to pull the latest two weeks' data every day, if
> > we put the date in the lead position of the key, we will have some hot
> > regions. So we can use a bucketing approach (date mod bucket number) to
> > deal with this situation (see the sketch after this thread). However, if
> > we have 200 buckets, we need to run 200 scans to extract all the data in
> > the last two weeks.
> >
> > My questions are:
> > 1. What happens when each scan returns its result? Will the scan results
> > be sent to a sink-like place that collects and concatenates all the scan
> > results?
> > 2. Why might having 200 scans be a bad thing compared to having only 10
> > scans?
> > 3. Any suggestions on the design?
> >
> > Many thanks.
> >
> >
> > Bill
> >
>
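
As referenced above, here is a minimal sketch of the bucketed (salted) layout Bill
describes: one prefix scan per bucket for a given day, with the results merged on
the client. The table name, the key layout <3-digit bucket><yyyyMMdd><rest>, and
reading the buckets sequentially are illustrative assumptions, not settled design:

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BucketedDateScan {

  private static final int NUM_BUCKETS = 200; // bucket count from the thread

  // Assumed key layout: <3-digit bucket><yyyyMMdd><rest of key>.
  private static byte[] prefix(int bucket, String yyyyMMdd) {
    return Bytes.toBytes(String.format("%03d%s", bucket, yyyyMMdd));
  }

  // One prefix scan per bucket for the given day, merged client-side.
  public static List<Result> scanDay(Connection conn, String yyyyMMdd) throws Exception {
    List<Result> merged = new ArrayList<Result>();
    try (Table table = conn.getTable(TableName.valueOf("trades"))) { // hypothetical table
      for (int bucket = 0; bucket < NUM_BUCKETS; bucket++) {
        Scan scan = new Scan();
        scan.setRowPrefixFilter(prefix(bucket, yyyyMMdd));
        try (ResultScanner scanner = table.getScanner(scan)) {
          for (Result r : scanner) {
            merged.add(r);
          }
        }
      }
    }
    return merged;
  }
}

In practice the per-bucket scans would be issued in parallel rather than in this
sequential loop; the "sink-like place that concatenates the results" Bill asks
about is exactly this client-side merge.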

