hbase-user mailing list archives

From: Yu Li <car...@gmail.com>
Subject: Re: how to optimize for heavy writes scenario
Date: Sat, 18 Mar 2017 05:11:26 GMT
First, please try out Stack's suggestions; they are all good ones.

Some supplementary notes: since all the disks in use are HDDs with ordinary
I/O capability, it's important to throttle heavy I/O such as flushes and
compactions. Try the features below (a sketch of the related settings follows
the list):
1. HBASE-8329 <https://issues.apache.org/jira/browse/HBASE-8329>: Limit
compaction speed (available in 1.1.0+)
2. HBASE-14969 <https://issues.apache.org/jira/browse/HBASE-14969>: Add
throughput controller for flush (available in 1.3.0)
3. HBASE-10201 <https://issues.apache.org/jira/browse/HBASE-10201>: Per
column family flush (available in 1.1.0+)
    * HBASE-14906 <https://issues.apache.org/jira/browse/HBASE-14906>:
Improvements on FlushLargeStoresPolicy (only available in 2.0, not released
yet)
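
For reference, a rough sketch of the related hbase-site.xml settings (the
property names and example values here are from memory, so please double-check
them against the JIRAs and your exact version before relying on them):

hbase.regionserver.throughput.controller =
    org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController
hbase.hstore.compaction.throughput.lower.bound = 52428800    (~50 MB/s floor)
hbase.hstore.compaction.throughput.higher.bound = 104857600  (~100 MB/s ceiling)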

Also try out multiple WALs; we observed a ~20% write performance boost in
production (a sample configuration is sketched below). See more details in the
doc attached to the following JIRA:
- HBASE-14457 <https://issues.apache.org/jira/browse/HBASE-14457>: Umbrella:
Improve Multiple WAL for production usage
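
For example (again, please verify the property names against your version):

hbase.wal.provider = multiwal
hbase.wal.regiongrouping.numgroups = 2    (number of WALs per region server)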

And please note that if you decide to pick up a branch-1.1 release, make sure
to use 1.1.3+, or you may hit a write performance regression; see
HBASE-14460 <https://issues.apache.org/jira/browse/HBASE-14460> for
more details.

Hope this information helps.

Best Regards,
Yu

On 18 March 2017 at 05:51, Vladimir Rodionov <vladrodionov@gmail.com> wrote:

> >> In my opinion,  1M/s input data will result in only  70MByte/s write
>
> Times 3 (the default HDFS replication factor). Plus ...
>
> Do not forget about compaction read/write amplification. If you flush 10 MB
> and your max region size is 10 GB, with the default minimum files to compact (3)
> your amplification is 6-7x. That gives us 70 x 3 x 6 = 1260 MB/s of read/write
> I/O, or 210 MB/sec of reads and writes (210 MB/s reads and 210 MB/sec writes)
>
> per RS
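>
> Spelled out with the 6 region servers in the cluster:
>   70 MB/s x 3 (HDFS replication) x ~6 (compaction amplification) ~= 1260 MB/s cluster-wide
>   1260 MB/s / 6 region servers ~= 210 MB/s of disk I/O per region server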
>
> This I/O load is way above sustainable levels.
>
>
> -Vlad
>
>
> On Fri, Mar 17, 2017 at 2:14 PM, Kevin O'Dell <kevin@rocana.com> wrote:
>
> > Hey Hef,
> >
> >   What is the memstore size setting (how much heap is it allowed to use)
> > on that cluster?  What is your region count per node?  Are you writing
> > evenly across all those regions, or are only a few regions active per
> > region server at a time?  Can you paste the GC settings that you are
> > currently using?
> >
> > On Fri, Mar 17, 2017 at 3:30 PM, Stack <stack@duboce.net> wrote:
> >
> > > On Fri, Mar 17, 2017 at 9:31 AM, Hef <hef.online@gmail.com> wrote:
> > >
> > > > Hi group,
> > > > I'm using HBase to store a large amount of time series data; the use
> > > > case is heavy on writes rather than reads. My application tops out at
> > > > writing 600k requests per second and I can't tune it for better TPS.
> > > >
> > > > Hardware:
> > > > I have 6 region servers, each with 128 GB of memory, 12 HDDs, and 2
> > > > cores with 24 threads.
> > > >
> > > > Schema:
> > > > The schema for the time series data is similar to OpenTSDB's: the data
> > > > points of the same metric within an hour are stored in one row, and
> > > > there can be at most 3600 columns per row.
> > > > The cell is about 70 bytes in size, including the rowkey, column
> > > > qualifier, column family and value.
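> > > >
> > > > Roughly, following the OpenTSDB-style layout (the exact encoding below
> > > > is an assumption, not my precise format):
> > > >   rowkey    = metric id + hour-aligned base timestamp (+ tags)
> > > >   qualifier = offset in seconds within the hour (0..3599)
> > > >   value     = the data point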
> > > >
> > > > HBase config:
> > > > CDH 5.6 HBase 1.0.0
> > > >
> > >
> > > Can you upgrade? There's a big diff between 1.2 and 1.0.
> > >
> > >
> > > > 100G memory for each RegionServer
> > > > hbase.hstore.compactionThreshold = 50
> > > > hbase.hstore.blockingStoreFiles = 100
> > > > hbase.hregion.majorcompaction disable
> > > > hbase.client.write.buffer = 20MB
> > > > hbase.regionserver.handler.count = 100
> > > >
> > >
> > > Could try halving the handler count.
> > >
> > >
> > > > hbase.hregion.memstore.flush.size = 128MB
> > > >
> > > >
> > > Why are you flushing? If it is because you are hitting this flush limit,
> > > can you try upping it?
> > >
> > >
> > >
> > > > HBase Client:
> > > > I write through a BufferedMutator with 100,000 mutations per batch
> > > > (see the sketch below)
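> > > >
> > > > A minimal sketch of that write path (the table name and the surrounding
> > > > class are placeholders, not my exact client code):
> > > >
> > > >   import java.io.IOException;
> > > >   import java.util.List;
> > > >   import org.apache.hadoop.conf.Configuration;
> > > >   import org.apache.hadoop.hbase.HBaseConfiguration;
> > > >   import org.apache.hadoop.hbase.TableName;
> > > >   import org.apache.hadoop.hbase.client.*;
> > > >
> > > >   public class TsdbBatchWriter {
> > > >     // Pushes one batch of ~100,000 Puts through a BufferedMutator.
> > > >     // (In the real writer the Connection is created once and reused.)
> > > >     static void writeBatch(List<Put> batch) throws IOException {
> > > >       Configuration conf = HBaseConfiguration.create();
> > > >       try (Connection conn = ConnectionFactory.createConnection(conf)) {
> > > >         BufferedMutatorParams params =
> > > >             new BufferedMutatorParams(TableName.valueOf("tsdb"))  // placeholder name
> > > >                 .writeBufferSize(20L * 1024 * 1024);  // hbase.client.write.buffer = 20MB
> > > >         try (BufferedMutator mutator = conn.getBufferedMutator(params)) {
> > > >           for (Put put : batch) {
> > > >             mutator.mutate(put);  // buffered client-side, sent when the buffer fills
> > > >           }
> > > >           mutator.flush();        // push whatever is still buffered
> > > >         }
> > > >       }
> > > >     }
> > > >   }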
> > > >
> > > > Inputs Volumes:
> > > > The input data throughput is more than 2 million records/sec from Kafka
> > > >
> > > >
> > > How is the distribution? Evenly over the keyspace?
> > >
> > >
> > > > My writer applications are distributed; however much I scale them up,
> > > > the total write throughput won't get larger than 600K/sec.
> > > >
> > >
> > >
> > > Tell us more about this scaling up. How many writers?
> > >
> > >
> > >
> > > > The servers have 20% CPU usage and 5.6 wa (I/O wait).
> > > >
> > >
> > > 5.6 is high enough. Is the i/o spread over the disks?
> > >
> > >
> > >
> > > > GC doesn't look good though; it shows a lot of 10s+ pauses.
> > > >
> > > >
> > > What settings do you have?
> > >
> > >
> > >
> > > > In my opinion, 1M/s input data will result in only 70 MByte/s of write
> > > > throughput to the cluster, which is quite a small amount compared to
> > > > the 6 region servers. The performance should not be this bad.
> > > >
> > > > Does anybody have an idea why the performance stops at 600K/s?
> > > > Is there anything I have to tune to increase the HBase write throughput?
> > > >
> > >
> > >
> > > If you double the number of writing clients, do you see an increase in
> > > throughput?
> > >
> > > If you thread-dump the servers, can you tell where they are held up? Or
> > > whether they are doing much work at all, relatively speaking?
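> > >
> > > (For the thread dumps: a few rounds of jstack <regionserver pid>, or a
> > > kill -QUIT against the process so the dump lands in the .out log, should
> > > show where the handlers are waiting.)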
> > >
> > > St.Ack
> > >
> >
> >
> >
> > --
> > Kevin O'Dell
> > Field Engineer
> > 850-496-1298 | Kevin@rocana.com
> > @kevinrodell
> > <http://www.rocana.com>
> >
>
