hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: HBase Replication questions
Date Thu, 24 Mar 2011 18:19:34 GMT
Inline.

Also if you think any of my answers should be part of the
documentation, feel free to open a jira with a patch :)

J-D

On Thu, Mar 24, 2011 at 11:05 AM, Otis Gospodnetic
<otis_gospodnetic@yahoo.com> wrote:
> Hello,
>
> We are looking into HBase replication to separate our client-facing HBase
> cluster from the one we need to run analytics against (likely heavy MR jobs +
> potentially big scans).
>
> 1. How long does it take for edits to be propagated to a slave cluster?
>
> As far as I understand from the HBase Replication page
> (http://hbase.apache.org/replication.html), there's a separate buffer held by
> each region server which accumulates data (edits that should be replicated from
> the edit log) before sending it to the slave cluster's RSs. So basically data is
> sent to the slave cluster when:
> * the buffer is full. Is there an option to configure its size (as a way to
> affect flushing frequency)?
> * the end of the edit log is reached by this "working thread". Does the thread
> process the edit log periodically, or does it watch for the edit log to change
> and act "immediately"? If the former, what is the default interval between
> runs? Can it be configured?

It ships as soon as the buffer is full or it reaches an EOF. The end of
the file is determined by when the file was last reopened, *because there's
no way to tail a file in HDFS without closing the reader, reopening
the file and seeking to a certain position*. The end result is that
replication cannot just keep buffering for minutes before sending, because it
hits the EOF pretty quickly. Our replication stream almost always has
sub-second lag. It only waits if it reaches the end and didn't read anything
new.
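
To illustrate the behavior (this is just a sketch, not the actual
ReplicationSource code; readNextEdit and ship are made-up stand-ins):

  // Rough illustration only: ship when the buffer is full or at EOF,
  // wait when EOF is hit and nothing new was read.
  import java.util.ArrayList;
  import java.util.List;

  public class ShippingLoopSketch {
    static final long SIZE_CAPACITY = 64L * 1024 * 1024; // replication.source.size.capacity
    static final int NB_CAPACITY = 25000;                // replication.source.nb.capacity
    static final long SLEEP_MS = 1000;                   // replication.source.sleepforretries

    public static void main(String[] args) throws InterruptedException {
      List<byte[]> buffer = new ArrayList<byte[]>();
      long bufferedBytes = 0;
      while (true) {
        byte[] edit = readNextEdit();   // made-up helper: next WAL edit, or null at EOF
        if (edit == null) {
          if (!buffer.isEmpty()) {
            ship(buffer);               // EOF reached: ship whatever was buffered
            buffer.clear();
            bufferedBytes = 0;
          } else {
            Thread.sleep(SLEEP_MS);     // nothing new was read: wait before trying again
          }
          continue;
        }
        buffer.add(edit);
        bufferedBytes += edit.length;
        if (bufferedBytes >= SIZE_CAPACITY || buffer.size() >= NB_CAPACITY) {
          ship(buffer);                 // buffer is full: ship right away
          buffer.clear();
          bufferedBytes = 0;
        }
      }
    }

    // Made-up stand-ins so the sketch compiles; the real code reads HLogs and
    // ships edits over RPC to a region server in the slave cluster.
    static byte[] readNextEdit() { return null; }
    static void ship(List<byte[]> edits) { }
  }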

Configurations (an example override in hbase-site.xml is sketched below):
replication.source.size.capacity, default is 64MB, but recently I saw
some OOME issues and I'm starting to think that this is too big.
replication.source.nb.capacity, default is 25k edits. The buffer is flushed
when either the size or the number of edits is reached. I'm thinking of
deleting this second config because what's really important is the size.
replication.source.maxretriesmultiplier, default is 10, so it retries
up to 10 times with pauses of currentIteration times
replication.source.sleepforretries. By default it sleeps 1 sec, 2, 3,
4... 9, 10, 10, 10, 10 until it's able to replicate.
replication.source.sleepforretries, default is 1 second, see above.
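
For example, to shrink that buffer you'd override it in the master
cluster region servers' hbase-site.xml, something like this (16MB here
is just an illustration, and I'm assuming the value is expressed in
bytes like the 64MB default):

  <property>
    <name>replication.source.size.capacity</name>
    <!-- e.g. 16MB instead of the default 64MB; pick what fits your heap -->
    <value>16777216</value>
  </property>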


>
> 2. How reliable is replication?
>
> It looks like when there are some networking issues and the slave cluster can't
> be reached, this is handled gracefully by the replication mechanism. It sounds
> like this should also cover the slave cluster going down for some reason. Are
> there any scenarios in which replication can break?

The biggest issue at the moment (from the replication documentation) is
HBASE-3130: the master cluster needs to be restarted if its region
servers lose their session with a slave cluster.

Also, reliability in general has gone down a bit in 0.90 because we
were using 0.89 for a long time and only recently started using
0.90.1... there are still a few bugs I'm hunting.

>
> 3. Replication of existing (and possibly big) cluster after the fact.
>
> What are the options for replicating all existing data to a new (& empty) slave
> cluster if replication wasn't configured from the start, and then keep
> replicating from that point on? It seems this might not be possible because
> edit logs on the master cluster get cleaned?

From the FAQ at the end of the replication documentation:

Q. You need a bulk edit shipper? Something that allows you to transfer
64MB of edits in one go?

A. You can use the HBase-provided utility called CopyTable from the
org.apache.hadoop.hbase.mapreduce package in order to have a
distcp-like tool to bulk copy data.
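
The invocation looks something along these lines; --peer.adr points at
the slave cluster's ZooKeeper ensemble, the quorum/port/znode and table
name below are made up, and running CopyTable with no arguments prints
the exact usage for your version:

  $ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
      --peer.adr=slave-zk1,slave-zk2,slave-zk3:2181:/hbase \
      your_table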


But in 0.90 there's a bug with TableOutputFormat, HBASE-3497, that
prevents using CopyTable across clusters; I'm fixing and testing it at
this very moment.
