hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas
Date Wed, 21 May 2014 18:51:44 GMT

    [ https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005082#comment-14005082 ]

stack commented on HBASE-10070:

bq. and hooking into Get and Scan, and defining several possible internal strategies on how
to send RPCs based on that ("primary timeout", "parallel", "parallel with delay"), maybe
we can define a pluggable strategy on how to execute RPCs? Similar to HDFS FailoverProxyProvider,
which can be defined in the client's config.

[~mantonov] So, rather than having the client ask for a level of 'consistency' in the API,
the replica interaction would be set at client construction, depending on the plugin supplied?

In the API at the moment we have STRONG and TIMELINE (what happens if I ask for TIMELINE and
the cluster is not deployed with read replicas? Is it ignored?). If we were to add QUORUM_STRONG,
are we thinking that a client should be able to choose amongst these options? Will that fly?
At the moment, as noted, we have amended Get and Scan. Will we have to amend all ops if we
follow the path of HBASE-10513?
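For concreteness, here is a minimal self-contained sketch of the per-operation knob being discussed. This is plain Java modeling the idea, not the real org.apache.hadoop.hbase.client classes; the names Consistency, STRONG, TIMELINE, and setConsistency mirror the API under discussion, but the classes themselves are illustrative stand-ins.

```java
// Illustrative model of per-operation consistency on Get; NOT the real
// HBase client classes.
enum Consistency {
    STRONG,   // read only from the primary region
    TIMELINE  // allow possibly-stale reads from secondary replicas
}

class Get {
    private final byte[] row;
    // A cluster deployed without read replicas behaves as STRONG anyway,
    // so STRONG is the natural default.
    private Consistency consistency = Consistency.STRONG;

    Get(byte[] row) { this.row = row; }

    Get setConsistency(Consistency c) { this.consistency = c; return this; }
    Consistency getConsistency()      { return consistency; }
    byte[] getRow()                   { return row; }
}
```

Under this model, asking for TIMELINE on a cluster without replicas can simply degrade to a primary-only read, which is one plausible answer to the "Ignored?" question above.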

How hard would it be to evolve from HBASE-10513 to [~mantonov]'s suggestion?
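A rough sketch of what the pluggable alternative might look like, by analogy with HDFS's FailoverProxyProvider, which is likewise selected by class name in client configuration. All names here (ReplicaRpcStrategy, the config strings, the factory) are hypothetical, not anything from the HBASE-10513 patch.

```java
import java.util.concurrent.Callable;

// Hypothetical plugin point: how the client fans a read RPC out to replicas.
interface ReplicaRpcStrategy {
    // Execute one logical read against the primary and/or a secondary,
    // returning the first usable result.
    <T> T call(Callable<T> primary, Callable<T> secondary) throws Exception;
}

// "primary timeout" flavor: try the primary; fall back to a secondary
// if the primary call fails.
class PrimaryTimeoutStrategy implements ReplicaRpcStrategy {
    public <T> T call(Callable<T> primary, Callable<T> secondary) throws Exception {
        try {
            return primary.call();
        } catch (Exception primaryFailed) {
            return secondary.call();
        }
    }
}

// Selected by name from the client's config, as FailoverProxyProvider is.
class ReplicaRpcStrategies {
    static ReplicaRpcStrategy forName(String configured) {
        switch (configured) {
            case "primary-timeout": return new PrimaryTimeoutStrategy();
            // "parallel" and "parallel-with-delay" implementations would
            // slot in here as further cases.
            default: throw new IllegalArgumentException(configured);
        }
    }
}
```

With something like this, the consistency decision moves out of each Get/Scan and into client construction, which is the trade-off being weighed above.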

> HBase read high-availability using timeline-consistent region replicas
> ----------------------------------------------------------------------
>                 Key: HBASE-10070
>                 URL: https://issues.apache.org/jira/browse/HBASE-10070
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>         Attachments: HighAvailabilityDesignforreadsApachedoc.pdf
> In the present HBase architecture, it is hard, probably impossible, to satisfy constraints
like serving the 99th percentile of reads under 10 ms. One of the major factors that
affects this is the MTTR for regions. There are three phases in the MTTR process: detection,
assignment, and recovery. Of these, detection is usually the longest and is presently
on the order of 20-30 seconds. During this time, clients are not able to read the
region data.
> However, some clients will be better served if regions are available for eventually
consistent reads during recovery. This will help satisfy low-latency guarantees for the
class of applications that can work with stale reads.
> For improving read availability, we propose a replicated read-only region serving design,
also referred to as secondary regions, or region shadows. Extending the current model of a
region being opened for reads and writes in a single region server, the region will also be
opened for reading in other region servers. The region server which hosts the region for
reads and writes (as in the current case) will be declared as PRIMARY, while 0 or more region
servers might be hosting the region as SECONDARY. There may be more than one secondary (replica count >
> Will attach a design doc shortly which contains most of the details and some thoughts
about development approaches. Reviews are more than welcome. 
> We also have a proof of concept patch, which includes the master and region server side
changes. Client side changes will be coming soon as well.
